Test Report: Docker_Linux_crio_arm64 22168

                    
9b787847521167b42f6debd67da4dc2d018928d7:2025-12-17:42812

Failed tests (46/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.29
44 TestAddons/parallel/Registry 14.12
45 TestAddons/parallel/RegistryCreds 0.53
46 TestAddons/parallel/Ingress 145.36
47 TestAddons/parallel/InspektorGadget 6.26
48 TestAddons/parallel/MetricsServer 5.38
50 TestAddons/parallel/CSI 41.93
51 TestAddons/parallel/Headlamp 3.12
52 TestAddons/parallel/CloudSpanner 5.3
53 TestAddons/parallel/LocalPath 9.38
54 TestAddons/parallel/NvidiaDevicePlugin 5.39
55 TestAddons/parallel/Yakd 6.28
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.25
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.33
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.02
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.42
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.41
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.74
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.35
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.29
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.07
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.7
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.33
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.43
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.64
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 2.18
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.12
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 120.03
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.56
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.27
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.25
273 TestMultiControlPlane/serial/RestartSecondaryNode 464.64
279 TestMultiControlPlane/serial/RestartCluster 477.07
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 6.37
281 TestMultiControlPlane/serial/AddSecondaryNode 86.99
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 6.03
293 TestJSONOutput/pause/Command 1.9
299 TestJSONOutput/unpause/Command 2.12
358 TestKubernetesUpgrade 794.38
384 TestPause/serial/Pause 7.09
436 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 7200.076
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable volcano --alsologtostderr -v=1: exit status 11 (288.038692ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:31:44.502629 1143419 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:31:44.504264 1143419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:44.504315 1143419 out.go:374] Setting ErrFile to fd 2...
	I1217 00:31:44.504339 1143419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:44.504802 1143419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:31:44.505177 1143419 mustload.go:66] Loading cluster: addons-219291
	I1217 00:31:44.505637 1143419 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:44.505679 1143419 addons.go:622] checking whether the cluster is paused
	I1217 00:31:44.505829 1143419 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:44.505863 1143419 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:31:44.506412 1143419 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:31:44.526636 1143419 ssh_runner.go:195] Run: systemctl --version
	I1217 00:31:44.526692 1143419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:31:44.544643 1143419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:31:44.643924 1143419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:31:44.644017 1143419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:31:44.675413 1143419 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:31:44.675439 1143419 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:31:44.675444 1143419 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:31:44.675448 1143419 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:31:44.675451 1143419 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:31:44.675455 1143419 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:31:44.675458 1143419 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:31:44.675462 1143419 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:31:44.675466 1143419 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:31:44.675473 1143419 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:31:44.675476 1143419 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:31:44.675479 1143419 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:31:44.675482 1143419 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:31:44.675487 1143419 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:31:44.675495 1143419 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:31:44.675501 1143419 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:31:44.675505 1143419 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:31:44.675513 1143419 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:31:44.675516 1143419 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:31:44.675519 1143419 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:31:44.675524 1143419 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:31:44.675527 1143419 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:31:44.675530 1143419 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:31:44.675533 1143419 cri.go:89] found id: ""
	I1217 00:31:44.675583 1143419 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:31:44.691752 1143419 out.go:203] 
	W1217 00:31:44.694743 1143419 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:31:44.694779 1143419 out.go:285] * 
	* 
	W1217 00:31:44.708809 1143419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:31:44.711600 1143419 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.29s)
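
All of the MK_ADDON_DISABLE_PAUSED failures in this run (this one, plus the Registry and RegistryCreds failures below) break at the same point: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json"; on this CRI-O node /run/runc does not exist, so the runc call exits 1 and the disable command aborts with exit status 11. Below is a minimal sketch of those two commands, meant to be run inside the node (for example via minikube ssh) purely as an illustration of the failing step, not as minikube's implementation.

	// paused_check.go: a minimal sketch of the two commands the failing
	// "addons disable" pre-check runs (see the log above). Run it inside the
	// minikube node, e.g. via minikube ssh. Illustration only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// First step from the log: list kube-system containers with crictl.
		// (The report wraps this in `sudo -s eval "..."`; plain sudo is used here for simplicity.)
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "crictl listing failed: %v\n%s", err, ids)
			os.Exit(1)
		}
		fmt.Printf("kube-system container IDs:\n%s", ids)

		// Second step, the one that fails in this run: on a CRI-O node there is
		// no /run/runc state directory, so `runc list` exits non-zero.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "runc list failed (as in the report): %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Printf("runc containers: %s\n", out)
	}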

                                                
                                    
TestAddons/parallel/Registry (14.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.505447ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007115318s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004311121s
addons_test.go:394: (dbg) Run:  kubectl --context addons-219291 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-219291 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-219291 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.598080199s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 ip
2025/12/17 00:32:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable registry --alsologtostderr -v=1: exit status 11 (256.577263ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:09.076044 1144345 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:09.076915 1144345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:09.076962 1144345 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:09.076985 1144345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:09.077253 1144345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:09.077566 1144345 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:09.077991 1144345 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:09.078037 1144345 addons.go:622] checking whether the cluster is paused
	I1217 00:32:09.078173 1144345 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:09.078211 1144345 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:09.078785 1144345 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:09.100253 1144345 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:09.100332 1144345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:09.119498 1144345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:09.215437 1144345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:09.215584 1144345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:09.246078 1144345 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:09.246100 1144345 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:09.246105 1144345 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:09.246109 1144345 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:09.246112 1144345 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:09.246116 1144345 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:09.246119 1144345 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:09.246123 1144345 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:09.246126 1144345 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:09.246135 1144345 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:09.246140 1144345 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:09.246144 1144345 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:09.246147 1144345 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:09.246150 1144345 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:09.246153 1144345 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:09.246166 1144345 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:09.246173 1144345 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:09.246178 1144345 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:09.246181 1144345 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:09.246184 1144345 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:09.246189 1144345 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:09.246193 1144345 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:09.246196 1144345 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:09.246199 1144345 cri.go:89] found id: ""
	I1217 00:32:09.246249 1144345 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:09.261551 1144345 out.go:203] 
	W1217 00:32:09.264538 1144345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:09.264561 1144345 out.go:285] * 
	* 
	W1217 00:32:09.272891 1144345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:09.275852 1144345 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.12s)
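
The registry itself checks out in this run: both registry pods report healthy, the in-cluster busybox wget --spider against http://registry.kube-system.svc.cluster.local succeeds, and the DEBUG line shows the follow-up GET against the node IP on port 5000. The test only fails at the "addons disable registry" step, with the same runc error as above. A minimal sketch of the host-side reachability probe, hard-coding the node IP from this report (normally obtained from minikube ip):

	// registry_probe.go: a minimal sketch of the host-side reachability check
	// reflected in the "[DEBUG] GET http://192.168.49.2:5000" line above. The
	// node IP is taken from this report; in practice it comes from minikube ip.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		// Node IP and registry port as shown in the report's DEBUG line.
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			fmt.Fprintf(os.Stderr, "registry not reachable: %v\n", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Printf("registry responded with %s\n", resp.Status)
	}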

                                                
                                    
TestAddons/parallel/RegistryCreds (0.53s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.52392ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-219291
addons_test.go:334: (dbg) Run:  kubectl --context addons-219291 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (298.324727ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:33:07.150217 1145909 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:33:07.151062 1145909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:07.151078 1145909 out.go:374] Setting ErrFile to fd 2...
	I1217 00:33:07.151085 1145909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:07.151459 1145909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:33:07.151805 1145909 mustload.go:66] Loading cluster: addons-219291
	I1217 00:33:07.152240 1145909 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:07.152262 1145909 addons.go:622] checking whether the cluster is paused
	I1217 00:33:07.152409 1145909 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:07.152473 1145909 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:33:07.152992 1145909 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:33:07.171195 1145909 ssh_runner.go:195] Run: systemctl --version
	I1217 00:33:07.171255 1145909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:33:07.190585 1145909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:33:07.287637 1145909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:33:07.287756 1145909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:33:07.349732 1145909 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:33:07.349764 1145909 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:33:07.349770 1145909 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:33:07.349780 1145909 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:33:07.349783 1145909 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:33:07.349787 1145909 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:33:07.349790 1145909 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:33:07.349793 1145909 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:33:07.349797 1145909 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:33:07.349803 1145909 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:33:07.349809 1145909 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:33:07.349812 1145909 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:33:07.349816 1145909 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:33:07.349819 1145909 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:33:07.349822 1145909 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:33:07.349827 1145909 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:33:07.349843 1145909 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:33:07.349847 1145909 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:33:07.349851 1145909 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:33:07.349854 1145909 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:33:07.349859 1145909 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:33:07.349862 1145909 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:33:07.349865 1145909 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:33:07.349868 1145909 cri.go:89] found id: ""
	I1217 00:33:07.349954 1145909 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:33:07.366620 1145909 out.go:203] 
	W1217 00:33:07.369515 1145909 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:33:07.369547 1145909 out.go:285] * 
	* 
	W1217 00:33:07.378127 1145909 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:33:07.381018 1145909 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)
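
RegistryCreds follows the same pattern: the configure step with ./testdata/addons_testconfig.json and the kubectl secret check both complete, and only the disable step trips over the missing /run/runc. A minimal sketch of that secret check done programmatically; it assumes kubectl is on PATH, uses the context name from this report, and asks for -o json instead of the test's -o yaml so the output can be parsed with the standard library:

	// secret_names.go: a minimal sketch of the verification step shown above
	// (kubectl -n kube-system get secret), illustration only.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type secretList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-219291",
			"-n", "kube-system", "get", "secret", "-o", "json").Output()
		if err != nil {
			fmt.Fprintf(os.Stderr, "kubectl failed: %v\n", err)
			os.Exit(1)
		}
		var list secretList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintf(os.Stderr, "parse failed: %v\n", err)
			os.Exit(1)
		}
		// Print the secret names found in kube-system.
		for _, s := range list.Items {
			fmt.Println(s.Metadata.Name)
		}
	}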

                                                
                                    
TestAddons/parallel/Ingress (145.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-219291 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-219291 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-219291 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9b17f137-c54b-40d9-8c8a-61de9e3f13d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9b17f137-c54b-40d9-8c8a-61de9e3f13d5] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003982324s
I1217 00:32:30.294705 1136597 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.227865364s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-219291 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
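
The Ingress failure is of a different kind: the nginx pod comes up and reports healthy, but the in-node curl against http://127.0.0.1/ with Host header nginx.example.com gets no answer and the ssh command exits after roughly 2m11s with status 28 (curl's timeout code). A minimal sketch of that probe as a standalone program; running it inside the node via minikube ssh, or pointing it at the node IP 192.168.49.2 from the host, are both assumptions here, and the timeout value is arbitrary:

	// ingress_probe.go: a minimal sketch of the request the failing step makes
	// (curl -H 'Host: nginx.example.com' http://127.0.0.1/ inside the node).
	// Where to run it (inside the node, or against the node IP) is an assumption.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The ingress routes on the Host header, exactly as the test's curl does.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 15 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			// A timeout here corresponds to the curl exit status 28 in the log.
			fmt.Fprintf(os.Stderr, "ingress not reachable: %v\n", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Printf("ingress responded with %s\n", resp.Status)
	}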
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-219291
helpers_test.go:244: (dbg) docker inspect addons-219291:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29",
	        "Created": "2025-12-17T00:29:23.60254559Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1138002,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:29:23.670211369Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/hosts",
	        "LogPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29-json.log",
	        "Name": "/addons-219291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-219291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-219291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29",
	                "LowerDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-219291",
	                "Source": "/var/lib/docker/volumes/addons-219291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-219291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-219291",
	                "name.minikube.sigs.k8s.io": "addons-219291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93b90d16bb3b47acf7c37c78d4acbd32aee1707d21a7cc33a012fe92373ae2a5",
	            "SandboxKey": "/var/run/docker/netns/93b90d16bb3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-219291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:ff:12:2e:03:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c734422b54cad368ea21c7b067862f0afc571c532d44186c4767bc4103a3f9d4",
	                    "EndpointID": "d027415df98d1b82093c3dbf222355a12ec44017dc2a80f7ee0f5362d77df53d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-219291",
	                        "c4d712690c2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-219291 -n addons-219291
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-219291 logs -n 25: (1.478865082s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ delete  │ -p download-docker-970516                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-970516 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ --download-only -p binary-mirror-272608 --alsologtostderr --binary-mirror http://127.0.0.1:43199 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-272608   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ -p binary-mirror-272608                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-272608   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ addons  │ enable dashboard -p addons-219291                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-219291                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ start   │ -p addons-219291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:31 UTC │
	│ addons  │ addons-219291 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ addons-219291 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-219291 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ addons-219291 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ addons-219291 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ip      │ addons-219291 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons  │ addons-219291 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ addons-219291 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ addons-219291 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ addons-219291 ssh cat /opt/local-path-provisioner/pvc-f193b14d-0c21-4e98-b1c3-e5e46af63ea3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons  │ addons-219291 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ addons-219291 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ addons-219291 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ addons-219291 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ addons-219291 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:33 UTC │                     │
	│ addons  │ addons-219291 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-219291                                                                                                                                                                                                                                                                                                                                                                                           │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:33 UTC │ 17 Dec 25 00:33 UTC │
	│ addons  │ addons-219291 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:33 UTC │                     │
	│ ip      │ addons-219291 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │ 17 Dec 25 00:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:28:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:28:58.475482 1137611 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:28:58.475666 1137611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:58.475702 1137611 out.go:374] Setting ErrFile to fd 2...
	I1217 00:28:58.475723 1137611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:58.476172 1137611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:28:58.477518 1137611 out.go:368] Setting JSON to false
	I1217 00:28:58.478361 1137611 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22289,"bootTime":1765909050,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:28:58.478434 1137611 start.go:143] virtualization:  
	I1217 00:28:58.481881 1137611 out.go:179] * [addons-219291] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:28:58.485626 1137611 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:28:58.485833 1137611 notify.go:221] Checking for updates...
	I1217 00:28:58.491276 1137611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:28:58.494291 1137611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:28:58.497133 1137611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:28:58.500030 1137611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:28:58.502881 1137611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:28:58.506039 1137611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:28:58.531586 1137611 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:28:58.531733 1137611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:58.598751 1137611 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:58.588769647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:58.598862 1137611 docker.go:319] overlay module found
	I1217 00:28:58.603972 1137611 out.go:179] * Using the docker driver based on user configuration
	I1217 00:28:58.606929 1137611 start.go:309] selected driver: docker
	I1217 00:28:58.606957 1137611 start.go:927] validating driver "docker" against <nil>
	I1217 00:28:58.606971 1137611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:28:58.607713 1137611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:58.668995 1137611 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:58.65893624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:58.669180 1137611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:28:58.669457 1137611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:28:58.672534 1137611 out.go:179] * Using Docker driver with root privileges
	I1217 00:28:58.675336 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:28:58.675405 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:28:58.675421 1137611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:28:58.675518 1137611 start.go:353] cluster config:
	{Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:28:58.678717 1137611 out.go:179] * Starting "addons-219291" primary control-plane node in "addons-219291" cluster
	I1217 00:28:58.681570 1137611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:28:58.684505 1137611 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:28:58.687384 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:28:58.687429 1137611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:58.687441 1137611 cache.go:65] Caching tarball of preloaded images
	I1217 00:28:58.687488 1137611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:28:58.687525 1137611 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:28:58.687536 1137611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:28:58.687911 1137611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json ...
	I1217 00:28:58.687945 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json: {Name:mk097cd69bde9af0d62b4dab5c8cf7c444d62365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:28:58.703390 1137611 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:58.703535 1137611 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:28:58.703570 1137611 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:28:58.703575 1137611 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:28:58.703582 1137611 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:28:58.703587 1137611 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 00:29:16.935006 1137611 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 00:29:16.935049 1137611 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:29:16.935103 1137611 start.go:360] acquireMachinesLock for addons-219291: {Name:mk7d4d51d983f82bba701a3615b816bcece5d275 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:29:16.935240 1137611 start.go:364] duration metric: took 110.094µs to acquireMachinesLock for "addons-219291"
	I1217 00:29:16.935272 1137611 start.go:93] Provisioning new machine with config: &{Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:29:16.935351 1137611 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:29:16.938777 1137611 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 00:29:16.939033 1137611 start.go:159] libmachine.API.Create for "addons-219291" (driver="docker")
	I1217 00:29:16.939074 1137611 client.go:173] LocalClient.Create starting
	I1217 00:29:16.939213 1137611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 00:29:17.226582 1137611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 00:29:17.413916 1137611 cli_runner.go:164] Run: docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:29:17.430131 1137611 cli_runner.go:211] docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:29:17.430234 1137611 network_create.go:284] running [docker network inspect addons-219291] to gather additional debugging logs...
	I1217 00:29:17.430260 1137611 cli_runner.go:164] Run: docker network inspect addons-219291
	W1217 00:29:17.446492 1137611 cli_runner.go:211] docker network inspect addons-219291 returned with exit code 1
	I1217 00:29:17.446537 1137611 network_create.go:287] error running [docker network inspect addons-219291]: docker network inspect addons-219291: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-219291 not found
	I1217 00:29:17.446560 1137611 network_create.go:289] output of [docker network inspect addons-219291]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-219291 not found
	
	** /stderr **
	I1217 00:29:17.446659 1137611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:29:17.462719 1137611 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001947a90}
	I1217 00:29:17.462758 1137611 network_create.go:124] attempt to create docker network addons-219291 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:29:17.462814 1137611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-219291 addons-219291
	I1217 00:29:17.525449 1137611 network_create.go:108] docker network addons-219291 192.168.49.0/24 created
	I1217 00:29:17.525488 1137611 kic.go:121] calculated static IP "192.168.49.2" for the "addons-219291" container
	I1217 00:29:17.525562 1137611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:29:17.540205 1137611 cli_runner.go:164] Run: docker volume create addons-219291 --label name.minikube.sigs.k8s.io=addons-219291 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:29:17.559071 1137611 oci.go:103] Successfully created a docker volume addons-219291
	I1217 00:29:17.559180 1137611 cli_runner.go:164] Run: docker run --rm --name addons-219291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --entrypoint /usr/bin/test -v addons-219291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:29:19.327922 1137611 cli_runner.go:217] Completed: docker run --rm --name addons-219291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --entrypoint /usr/bin/test -v addons-219291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.76869955s)
	I1217 00:29:19.327952 1137611 oci.go:107] Successfully prepared a docker volume addons-219291
	I1217 00:29:19.328002 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:29:19.328017 1137611 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:29:19.328087 1137611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-219291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:29:23.528612 1137611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-219291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.200482419s)
	I1217 00:29:23.528649 1137611 kic.go:203] duration metric: took 4.200629083s to extract preloaded images to volume ...
	W1217 00:29:23.528789 1137611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 00:29:23.528915 1137611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:29:23.587229 1137611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-219291 --name addons-219291 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-219291 --network addons-219291 --ip 192.168.49.2 --volume addons-219291:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:29:23.876165 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Running}}
	I1217 00:29:23.901675 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:23.922095 1137611 cli_runner.go:164] Run: docker exec addons-219291 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:29:23.981200 1137611 oci.go:144] the created container "addons-219291" has a running status.
	I1217 00:29:23.981236 1137611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa...
	I1217 00:29:24.108794 1137611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:29:24.136411 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:24.163852 1137611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:29:24.163879 1137611 kic_runner.go:114] Args: [docker exec --privileged addons-219291 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:29:24.229716 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:24.253893 1137611 machine.go:94] provisionDockerMachine start ...
	I1217 00:29:24.253997 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:24.278554 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:24.278875 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:24.278884 1137611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:29:24.279455 1137611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56286->127.0.0.1:33893: read: connection reset by peer
	I1217 00:29:27.412313 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-219291
	
	I1217 00:29:27.412335 1137611 ubuntu.go:182] provisioning hostname "addons-219291"
	I1217 00:29:27.412409 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:27.431663 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:27.431993 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:27.432004 1137611 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-219291 && echo "addons-219291" | sudo tee /etc/hostname
	I1217 00:29:27.569734 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-219291
	
	I1217 00:29:27.569813 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:27.586573 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:27.586889 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:27.586912 1137611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-219291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-219291/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-219291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:29:27.716711 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:29:27.716739 1137611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:29:27.716762 1137611 ubuntu.go:190] setting up certificates
	I1217 00:29:27.716781 1137611 provision.go:84] configureAuth start
	I1217 00:29:27.716847 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:27.736455 1137611 provision.go:143] copyHostCerts
	I1217 00:29:27.736534 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:29:27.736650 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:29:27.736705 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:29:27.736754 1137611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.addons-219291 san=[127.0.0.1 192.168.49.2 addons-219291 localhost minikube]
	I1217 00:29:28.368211 1137611 provision.go:177] copyRemoteCerts
	I1217 00:29:28.368290 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:29:28.368333 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.385274 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:28.480585 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:29:28.498025 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:29:28.516163 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:29:28.533796 1137611 provision.go:87] duration metric: took 816.987711ms to configureAuth
	I1217 00:29:28.533823 1137611 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:29:28.534011 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:28.534108 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.551174 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:28.551473 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:28.551486 1137611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:29:28.839881 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:29:28.839907 1137611 machine.go:97] duration metric: took 4.585994146s to provisionDockerMachine
	I1217 00:29:28.839919 1137611 client.go:176] duration metric: took 11.90083791s to LocalClient.Create
	I1217 00:29:28.839933 1137611 start.go:167] duration metric: took 11.900903247s to libmachine.API.Create "addons-219291"
	I1217 00:29:28.839940 1137611 start.go:293] postStartSetup for "addons-219291" (driver="docker")
	I1217 00:29:28.839950 1137611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:29:28.840031 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:29:28.840077 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.860967 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:28.960734 1137611 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:29:28.964300 1137611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:29:28.964332 1137611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:29:28.964351 1137611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:29:28.964445 1137611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:29:28.964477 1137611 start.go:296] duration metric: took 124.53156ms for postStartSetup
	I1217 00:29:28.964825 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:28.982820 1137611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json ...
	I1217 00:29:28.983121 1137611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:29:28.983174 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.000632 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.093592 1137611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:29:29.098293 1137611 start.go:128] duration metric: took 12.162926732s to createHost
	I1217 00:29:29.098322 1137611 start.go:83] releasing machines lock for "addons-219291", held for 12.163067914s
	I1217 00:29:29.098406 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:29.115379 1137611 ssh_runner.go:195] Run: cat /version.json
	I1217 00:29:29.115391 1137611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:29:29.115431 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.115454 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.135338 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.135923 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.324290 1137611 ssh_runner.go:195] Run: systemctl --version
	I1217 00:29:29.330603 1137611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:29:29.366794 1137611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:29:29.371235 1137611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:29:29.371318 1137611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:29:29.400111 1137611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 00:29:29.400185 1137611 start.go:496] detecting cgroup driver to use...
	I1217 00:29:29.400237 1137611 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:29:29.400320 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:29:29.418047 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:29:29.430534 1137611 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:29:29.430602 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:29:29.448043 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:29:29.465618 1137611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:29:29.585931 1137611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:29:29.713860 1137611 docker.go:234] disabling docker service ...
	I1217 00:29:29.713980 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:29:29.735715 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:29:29.748796 1137611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:29:29.874818 1137611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:29:30.016234 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:29:30.043132 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:29:30.063416 1137611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:29:30.063577 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.075262 1137611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:29:30.075373 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.086498 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.097618 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.109129 1137611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:29:30.117843 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.127124 1137611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.141693 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.151096 1137611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:29:30.159312 1137611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:29:30.167161 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:30.275269 1137611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:29:30.464222 1137611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:29:30.464367 1137611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:29:30.468247 1137611 start.go:564] Will wait 60s for crictl version
	I1217 00:29:30.468316 1137611 ssh_runner.go:195] Run: which crictl
	I1217 00:29:30.471882 1137611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:29:30.497522 1137611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:29:30.497672 1137611 ssh_runner.go:195] Run: crio --version
	I1217 00:29:30.526333 1137611 ssh_runner.go:195] Run: crio --version
	I1217 00:29:30.556996 1137611 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:29:30.559896 1137611 cli_runner.go:164] Run: docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:29:30.576214 1137611 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:29:30.579961 1137611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:29:30.589427 1137611 kubeadm.go:884] updating cluster {Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:29:30.589538 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:29:30.589596 1137611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:29:30.622882 1137611 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:29:30.622902 1137611 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:29:30.622958 1137611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:29:30.648583 1137611 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:29:30.648659 1137611 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:29:30.648682 1137611 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 00:29:30.648795 1137611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-219291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:29:30.648904 1137611 ssh_runner.go:195] Run: crio config
	I1217 00:29:30.703965 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:29:30.703985 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:29:30.704027 1137611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:29:30.704062 1137611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-219291 NodeName:addons-219291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:29:30.704237 1137611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-219291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:29:30.704329 1137611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:29:30.712118 1137611 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:29:30.712200 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:29:30.719732 1137611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:29:30.732970 1137611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:29:30.746036 1137611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1217 00:29:30.758176 1137611 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:29:30.761520 1137611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:29:30.771061 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:30.879904 1137611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:29:30.895215 1137611 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291 for IP: 192.168.49.2
	I1217 00:29:30.895279 1137611 certs.go:195] generating shared ca certs ...
	I1217 00:29:30.895314 1137611 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:30.895475 1137611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:29:31.281739 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt ...
	I1217 00:29:31.281783 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt: {Name:mk92394348a9935f40213952c9d4fb2eda7e5498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.282003 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key ...
	I1217 00:29:31.282017 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key: {Name:mkda72fe613060d20a63f3cac79dba6dd39106e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.282127 1137611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:29:31.457029 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt ...
	I1217 00:29:31.457068 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt: {Name:mk938c89539bbbe196bc596f338cfab76af6c380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.457349 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key ...
	I1217 00:29:31.457367 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key: {Name:mkfeda5155b13c1f96e38fb52393a3561bb8db26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.457475 1137611 certs.go:257] generating profile certs ...
	I1217 00:29:31.457548 1137611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key
	I1217 00:29:31.457566 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt with IP's: []
	I1217 00:29:31.701779 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt ...
	I1217 00:29:31.701817 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: {Name:mk88410de71422f0cc13c1a134a421e6c8a8bc16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.702052 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key ...
	I1217 00:29:31.702067 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key: {Name:mkb3ddac70f38b743f729f188f949c6969ac609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.702177 1137611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49
	I1217 00:29:31.702200 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:29:31.800081 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 ...
	I1217 00:29:31.800113 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49: {Name:mkf055a340a300d8c006061c460891ea08ff8a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.800324 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49 ...
	I1217 00:29:31.800342 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49: {Name:mk79dc3e4ab29523777a4fa252cc6966bb976355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.800449 1137611 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt
	I1217 00:29:31.800535 1137611 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key
	I1217 00:29:31.800594 1137611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key
	I1217 00:29:31.800618 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt with IP's: []
	I1217 00:29:32.020248 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt ...
	I1217 00:29:32.020283 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt: {Name:mk34d2a918d25278db931914d99e52a66d5a9615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:32.020518 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key ...
	I1217 00:29:32.020543 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key: {Name:mk7ef58b405dea739bf6abc0d863015d75941ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:32.020734 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:29:32.020781 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:29:32.020816 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:29:32.020845 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
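The minikubeCA and proxyClientCA material above is generated in-process by minikube's certificate helpers (crypto.go) rather than by kubeadm, which later logs "Using existing ca certificate authority". A minimal, self-contained sketch of producing a comparable self-signed CA with Go's crypto/x509 follows; the common name, 2048-bit key size, and ten-year validity are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA private key.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template; subject and validity are illustrative, not minikube's exact values.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-sign: the template acts as both certificate and parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Write PEM-encoded cert and key, mirroring the ca.crt / ca.key pair written above.
	certOut, err := os.Create("ca.crt")
	if err != nil {
		panic(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, err := os.Create("ca.key")
	if err != nil {
		panic(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}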
	I1217 00:29:32.021505 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:29:32.041678 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:29:32.065429 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:29:32.085479 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:29:32.103497 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:29:32.121631 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:29:32.141504 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:29:32.160920 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:29:32.178686 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:29:32.196567 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:29:32.209487 1137611 ssh_runner.go:195] Run: openssl version
	I1217 00:29:32.215724 1137611 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.223105 1137611 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:29:32.230605 1137611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.234502 1137611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.234573 1137611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.275767 1137611 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:29:32.283372 1137611 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:29:32.290921 1137611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:29:32.294705 1137611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:29:32.294759 1137611 kubeadm.go:401] StartCluster: {Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:29:32.294847 1137611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:29:32.294914 1137611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:29:32.321773 1137611 cri.go:89] found id: ""
	I1217 00:29:32.321880 1137611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:29:32.330432 1137611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:29:32.338128 1137611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:29:32.338229 1137611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:29:32.345903 1137611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:29:32.345923 1137611 kubeadm.go:158] found existing configuration files:
	
	I1217 00:29:32.345992 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:29:32.353924 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:29:32.354046 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:29:32.361554 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:29:32.369329 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:29:32.369424 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:29:32.376983 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:29:32.384562 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:29:32.384629 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:29:32.392132 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:29:32.399834 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:29:32.399954 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:29:32.407247 1137611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:29:32.447089 1137611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:29:32.447152 1137611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:29:32.487326 1137611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:29:32.487413 1137611 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:29:32.487452 1137611 kubeadm.go:319] OS: Linux
	I1217 00:29:32.487506 1137611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:29:32.487559 1137611 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:29:32.487619 1137611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:29:32.487680 1137611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:29:32.487737 1137611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:29:32.487795 1137611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:29:32.487855 1137611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:29:32.487918 1137611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:29:32.487972 1137611 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:29:32.565314 1137611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:29:32.565519 1137611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:29:32.565657 1137611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:29:32.573198 1137611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:29:32.580041 1137611 out.go:252]   - Generating certificates and keys ...
	I1217 00:29:32.580171 1137611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:29:32.580255 1137611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:29:33.056240 1137611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:29:34.056761 1137611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:29:34.189451 1137611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:29:34.266607 1137611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:29:34.752860 1137611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:29:34.753055 1137611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-219291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:29:35.373413 1137611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:29:35.373568 1137611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-219291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:29:35.908836 1137611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:29:37.145510 1137611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:29:37.699579 1137611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:29:37.699866 1137611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:29:37.835740 1137611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:29:38.221952 1137611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:29:38.539901 1137611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:29:39.500364 1137611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:29:39.954127 1137611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:29:39.954830 1137611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:29:39.957632 1137611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:29:39.961029 1137611 out.go:252]   - Booting up control plane ...
	I1217 00:29:39.961186 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:29:39.961292 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:29:39.961372 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:29:39.988595 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:29:39.988716 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:29:39.997109 1137611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:29:39.997486 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:29:39.997816 1137611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:29:40.153071 1137611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:29:40.153195 1137611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:29:40.652534 1137611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.762751ms
	I1217 00:29:40.655968 1137611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:29:40.656061 1137611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 00:29:40.656151 1137611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:29:40.656229 1137611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:29:44.538120 1137611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.881713205s
	I1217 00:29:45.380161 1137611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.724193711s
	I1217 00:29:47.157802 1137611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501681113s
	I1217 00:29:47.189826 1137611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:29:47.205166 1137611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:29:47.222886 1137611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:29:47.223107 1137611 kubeadm.go:319] [mark-control-plane] Marking the node addons-219291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:29:47.238189 1137611 kubeadm.go:319] [bootstrap-token] Using token: 5b0eqd.nc2k7xajx6gxbf2i
	I1217 00:29:47.241254 1137611 out.go:252]   - Configuring RBAC rules ...
	I1217 00:29:47.241393 1137611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:29:47.250387 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:29:47.263593 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:29:47.268194 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:29:47.272806 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:29:47.278070 1137611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:29:47.565001 1137611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:29:48.014413 1137611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:29:48.564932 1137611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:29:48.566514 1137611 kubeadm.go:319] 
	I1217 00:29:48.566608 1137611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:29:48.566619 1137611 kubeadm.go:319] 
	I1217 00:29:48.566706 1137611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:29:48.566717 1137611 kubeadm.go:319] 
	I1217 00:29:48.566749 1137611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:29:48.566822 1137611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:29:48.566885 1137611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:29:48.566900 1137611 kubeadm.go:319] 
	I1217 00:29:48.566964 1137611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:29:48.566973 1137611 kubeadm.go:319] 
	I1217 00:29:48.567028 1137611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:29:48.567037 1137611 kubeadm.go:319] 
	I1217 00:29:48.567093 1137611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:29:48.567178 1137611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:29:48.567255 1137611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:29:48.567263 1137611 kubeadm.go:319] 
	I1217 00:29:48.567350 1137611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:29:48.567434 1137611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:29:48.567442 1137611 kubeadm.go:319] 
	I1217 00:29:48.567539 1137611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5b0eqd.nc2k7xajx6gxbf2i \
	I1217 00:29:48.567672 1137611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 \
	I1217 00:29:48.567702 1137611 kubeadm.go:319] 	--control-plane 
	I1217 00:29:48.567710 1137611 kubeadm.go:319] 
	I1217 00:29:48.567813 1137611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:29:48.567821 1137611 kubeadm.go:319] 
	I1217 00:29:48.567913 1137611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5b0eqd.nc2k7xajx6gxbf2i \
	I1217 00:29:48.568018 1137611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 
	I1217 00:29:48.570852 1137611 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 00:29:48.571117 1137611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:29:48.571236 1137611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:29:48.571261 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:29:48.571269 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:29:48.574595 1137611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:29:48.577523 1137611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:29:48.582113 1137611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:29:48.582136 1137611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:29:48.596346 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 00:29:48.897636 1137611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:29:48.897720 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:48.897789 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-219291 minikube.k8s.io/updated_at=2025_12_17T00_29_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-219291 minikube.k8s.io/primary=true
	I1217 00:29:49.099267 1137611 ops.go:34] apiserver oom_adj: -16
	I1217 00:29:49.099377 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:49.599498 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:50.099642 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:50.600064 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:51.099580 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:51.599922 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:52.100163 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:52.600464 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:53.100196 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:53.201230 1137611 kubeadm.go:1114] duration metric: took 4.303571467s to wait for elevateKubeSystemPrivileges
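The repeated "kubectl get sa default" runs above poll at roughly 500ms intervals (as the timestamps show) until the default ServiceAccount exists; that wait is what the elevateKubeSystemPrivileges duration metric on this line measures. A client-go sketch of an equivalent wait follows; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not what minikube itself uses.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; the test runs kubectl on the node with /var/lib/minikube/kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, for up to 2 minutes, until the "default" ServiceAccount exists.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}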
	I1217 00:29:53.201263 1137611 kubeadm.go:403] duration metric: took 20.906508288s to StartCluster
	I1217 00:29:53.201282 1137611 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:53.201402 1137611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:29:53.201798 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:53.202004 1137611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:29:53.202143 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:29:53.202386 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:53.202426 1137611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
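The toEnable map on the previous line drives the per-addon fan-out below: each entry set to true produces a "Setting addon ...=true" step plus a "docker container inspect" to confirm the node is still running. A toy Go sketch of deriving the enabled-addon list from such a map follows; the map literal is a hand-picked subset of the logged map, not the full set.

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Subset of the toEnable map logged above; the value says whether the addon should be enabled.
	toEnable := map[string]bool{
		"default-storageclass": true,
		"ingress":              true,
		"metrics-server":       true,
		"volcano":              true,
		"dashboard":            false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // deterministic order for the per-addon setup steps
	fmt.Println(enabled)
}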
	I1217 00:29:53.202508 1137611 addons.go:70] Setting yakd=true in profile "addons-219291"
	I1217 00:29:53.202529 1137611 addons.go:239] Setting addon yakd=true in "addons-219291"
	I1217 00:29:53.202552 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.203009 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.203509 1137611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-219291"
	I1217 00:29:53.203534 1137611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-219291"
	I1217 00:29:53.203557 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.203987 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.204135 1137611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-219291"
	I1217 00:29:53.204158 1137611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-219291"
	I1217 00:29:53.204183 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.204648 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207324 1137611 addons.go:70] Setting cloud-spanner=true in profile "addons-219291"
	I1217 00:29:53.207360 1137611 addons.go:239] Setting addon cloud-spanner=true in "addons-219291"
	I1217 00:29:53.207397 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.207455 1137611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-219291"
	I1217 00:29:53.207502 1137611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-219291"
	I1217 00:29:53.207527 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.207916 1137611 addons.go:70] Setting registry=true in profile "addons-219291"
	I1217 00:29:53.207964 1137611 addons.go:239] Setting addon registry=true in "addons-219291"
	I1217 00:29:53.207979 1137611 addons.go:70] Setting gcp-auth=true in profile "addons-219291"
	I1217 00:29:53.208009 1137611 mustload.go:66] Loading cluster: addons-219291
	I1217 00:29:53.208043 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.208150 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:53.208349 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.208598 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207969 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.220118 1137611 addons.go:70] Setting ingress=true in profile "addons-219291"
	I1217 00:29:53.220151 1137611 addons.go:239] Setting addon ingress=true in "addons-219291"
	I1217 00:29:53.220209 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.220778 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.221499 1137611 addons.go:70] Setting ingress-dns=true in profile "addons-219291"
	I1217 00:29:53.221532 1137611 addons.go:239] Setting addon ingress-dns=true in "addons-219291"
	I1217 00:29:53.221617 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.222307 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207974 1137611 addons.go:70] Setting default-storageclass=true in profile "addons-219291"
	I1217 00:29:53.230765 1137611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-219291"
	I1217 00:29:53.230961 1137611 addons.go:70] Setting registry-creds=true in profile "addons-219291"
	I1217 00:29:53.230986 1137611 addons.go:239] Setting addon registry-creds=true in "addons-219291"
	I1217 00:29:53.231015 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.231419 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.235502 1137611 addons.go:70] Setting inspektor-gadget=true in profile "addons-219291"
	I1217 00:29:53.235537 1137611 addons.go:239] Setting addon inspektor-gadget=true in "addons-219291"
	I1217 00:29:53.235626 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.236251 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.251969 1137611 addons.go:70] Setting storage-provisioner=true in profile "addons-219291"
	I1217 00:29:53.252011 1137611 addons.go:239] Setting addon storage-provisioner=true in "addons-219291"
	I1217 00:29:53.252047 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.252552 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.256406 1137611 addons.go:70] Setting metrics-server=true in profile "addons-219291"
	I1217 00:29:53.256461 1137611 addons.go:239] Setting addon metrics-server=true in "addons-219291"
	I1217 00:29:53.256495 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.256977 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.275958 1137611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-219291"
	I1217 00:29:53.276255 1137611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-219291"
	I1217 00:29:53.280669 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.287998 1137611 out.go:179] * Verifying Kubernetes components...
	I1217 00:29:53.292786 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:53.293452 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.310477 1137611 addons.go:70] Setting volcano=true in profile "addons-219291"
	I1217 00:29:53.310558 1137611 addons.go:239] Setting addon volcano=true in "addons-219291"
	I1217 00:29:53.310609 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.311116 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.336245 1137611 addons.go:70] Setting volumesnapshots=true in profile "addons-219291"
	I1217 00:29:53.336348 1137611 addons.go:239] Setting addon volumesnapshots=true in "addons-219291"
	I1217 00:29:53.336410 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.337461 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.350452 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.371652 1137611 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 00:29:53.374612 1137611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:29:53.374635 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 00:29:53.374705 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.460463 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 00:29:53.466659 1137611 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 00:29:53.470947 1137611 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1217 00:29:53.478359 1137611 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 00:29:53.487014 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 00:29:53.487044 1137611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 00:29:53.487126 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.501150 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 00:29:53.501627 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 00:29:53.503449 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.511215 1137611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:29:53.511236 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 00:29:53.511304 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.514149 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 00:29:53.514402 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.517562 1137611 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1217 00:29:53.519060 1137611 addons.go:239] Setting addon default-storageclass=true in "addons-219291"
	I1217 00:29:53.519097 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.519512 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	W1217 00:29:53.536867 1137611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 00:29:53.543360 1137611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-219291"
	I1217 00:29:53.543414 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.543893 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.554483 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 00:29:53.554759 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:29:53.555792 1137611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:29:53.555810 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 00:29:53.555863 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.561003 1137611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:29:53.561022 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:29:53.561100 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.577450 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 00:29:53.577866 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:29:53.581814 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 00:29:53.601503 1137611 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 00:29:53.609425 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:29:53.609655 1137611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:29:53.609701 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 00:29:53.609803 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.616665 1137611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:29:53.616729 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 00:29:53.616820 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.623983 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 00:29:53.624058 1137611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 00:29:53.624150 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.626833 1137611 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 00:29:53.641537 1137611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 00:29:53.641632 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 00:29:53.641747 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.665939 1137611 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 00:29:53.676628 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 00:29:53.676816 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:29:53.676871 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 00:29:53.677003 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.678637 1137611 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 00:29:53.685984 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 00:29:53.686132 1137611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 00:29:53.686294 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.698461 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.699695 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 00:29:53.704628 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 00:29:53.712547 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 00:29:53.716672 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 00:29:53.718789 1137611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:29:53.718808 1137611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:29:53.718870 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.725522 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 00:29:53.731549 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 00:29:53.731576 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 00:29:53.731652 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.754078 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.757220 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.764131 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.786642 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.792175 1137611 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 00:29:53.796826 1137611 out.go:179]   - Using image docker.io/busybox:stable
	I1217 00:29:53.799784 1137611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:29:53.799812 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 00:29:53.799876 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.827102 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.840580 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.895477 1137611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:29:53.895667 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:29:53.907000 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.917009 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.917477 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.924718 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.930747 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.930747 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.938509 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.944293 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:54.501877 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:29:54.541528 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 00:29:54.541596 1137611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 00:29:54.565306 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:29:54.611642 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:29:54.615534 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:29:54.618222 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 00:29:54.618246 1137611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 00:29:54.685566 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 00:29:54.685596 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 00:29:54.689391 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:29:54.689417 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 00:29:54.701945 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 00:29:54.701973 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 00:29:54.715291 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:29:54.727290 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 00:29:54.782972 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 00:29:54.782999 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 00:29:54.789931 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:29:54.793930 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:29:54.805234 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:29:54.808430 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 00:29:54.808456 1137611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 00:29:54.842412 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:29:54.913962 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 00:29:54.913991 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 00:29:54.943454 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 00:29:54.943487 1137611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 00:29:54.945910 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:29:55.033431 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 00:29:55.033457 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 00:29:55.080994 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 00:29:55.081016 1137611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 00:29:55.147051 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:29:55.147088 1137611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 00:29:55.163299 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 00:29:55.163327 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 00:29:55.252538 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 00:29:55.252567 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 00:29:55.261716 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:29:55.261760 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 00:29:55.325884 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 00:29:55.325915 1137611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 00:29:55.340903 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:29:55.429020 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:29:55.441800 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 00:29:55.441828 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 00:29:55.523159 1137611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:55.523193 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 00:29:55.639845 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 00:29:55.639874 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 00:29:55.725624 1137611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.83010308s)
	I1217 00:29:55.725681 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.223736317s)
	I1217 00:29:55.725817 1137611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.830136901s)
	I1217 00:29:55.725835 1137611 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 00:29:55.727158 1137611 node_ready.go:35] waiting up to 6m0s for node "addons-219291" to be "Ready" ...
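	The node_ready.go wait that starts here polls the node object until its Ready condition turns True, which is why the log below keeps printing "Ready":"False" (will retry). A minimal client-go sketch of that kind of loop, assuming the kubeconfig path and node name from the log; the interval and helper are illustrative, not minikube's actual node_ready implementation:

```go
// node_ready_sketch.go — hypothetical polling loop, not minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep retrying until the timeout
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Mirror the 6m0s budget the log uses for node "addons-219291".
	if err := waitForNodeReady(context.Background(), cs, "addons-219291", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```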
	I1217 00:29:55.856100 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:55.959028 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 00:29:55.959054 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 00:29:56.232472 1137611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-219291" context rescaled to 1 replicas
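	The "rescaled to 1 replicas" line corresponds to shrinking the coredns deployment to a single replica for this single-node cluster. A hedged client-go sketch of that scale-down via the scale subresource (namespace and deployment name come from the log; the helper itself is illustrative, not minikube's kapi.go):

```go
// scale_coredns_sketch.go — illustrative only; not minikube's kapi.go.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment sets the replica count of a deployment through the scale subresource.
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

// Usage with a configured clientset cs:
//   _ = scaleDeployment(ctx, cs, "kube-system", "coredns", 1)
```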
	I1217 00:29:56.234776 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 00:29:56.234815 1137611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 00:29:56.372878 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 00:29:56.372919 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 00:29:56.649174 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 00:29:56.649248 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 00:29:56.988484 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:29:56.988553 1137611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 00:29:57.137192 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1217 00:29:57.735333 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:29:59.649164 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.033558926s)
	I1217 00:29:59.649264 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.933946846s)
	I1217 00:29:59.649499 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.922182506s)
	I1217 00:29:59.649572 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.859612721s)
	I1217 00:29:59.649625 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.855670742s)
	I1217 00:29:59.649690 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.844433545s)
	I1217 00:29:59.649716 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.807282224s)
	I1217 00:29:59.649844 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.703903234s)
	I1217 00:29:59.649863 1137611 addons.go:495] Verifying addon registry=true in "addons-219291"
	I1217 00:29:59.649989 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.037394855s)
	I1217 00:29:59.649974 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.084596227s)
	I1217 00:29:59.650075 1137611 addons.go:495] Verifying addon ingress=true in "addons-219291"
	I1217 00:29:59.650129 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.30919103s)
	I1217 00:29:59.650151 1137611 addons.go:495] Verifying addon metrics-server=true in "addons-219291"
	I1217 00:29:59.650197 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.221139258s)
	I1217 00:29:59.650610 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.794480844s)
	W1217 00:29:59.651017 1137611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:29:59.651045 1137611 retry.go:31] will retry after 249.132448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
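	The failure above is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates its CRD, so the REST mapping for that kind does not exist yet and kubectl reports "ensure CRDs are installed first". The log shows minikube simply retrying the whole apply after 249ms (and later re-running it with --force). A minimal sketch of that retry-the-apply pattern, assuming a generic wrapper rather than minikube's addons.go:

```go
// apply_retry_sketch.go — generic retry wrapper, not minikube's addons.go.
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f ...` when the error looks like a
// not-yet-established CRD ("no matches for kind"), giving the API server time
// to register the new resource types between attempts.
func applyWithRetry(kubectl string, files []string, attempts int, delay time.Duration) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v: %s", err, out)
		if !strings.Contains(string(out), "no matches for kind") {
			return lastErr // a different failure: don't keep hammering the API server
		}
		time.Sleep(delay)
	}
	return lastErr
}
```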
	I1217 00:29:59.652891 1137611 out.go:179] * Verifying registry addon...
	I1217 00:29:59.652992 1137611 out.go:179] * Verifying ingress addon...
	I1217 00:29:59.655016 1137611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-219291 service yakd-dashboard -n yakd-dashboard
	
	I1217 00:29:59.657784 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 00:29:59.658706 1137611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 00:29:59.702184 1137611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:29:59.702211 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:29:59.703512 1137611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 00:29:59.703539 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:29:59.709075 1137611 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
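	The default-storageclass warning above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between the read and the write, so the update is rejected with "the object has been modified". The usual remedy is to re-read and re-apply the change inside a conflict-retry loop; a hedged client-go sketch (the annotation key is the standard default-class annotation, everything else is illustrative, not minikube's storageclass callback):

```go
// default_storageclass_sketch.go — illustrative conflict retry, not minikube's code.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setDefaultStorageClass marks (or unmarks) a StorageClass as the default,
// retrying on "object has been modified" conflicts by re-fetching the latest version.
func setDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name, isDefault string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = isDefault
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}

// e.g. setDefaultStorageClass(ctx, cs, "local-path", "false") to demote the
// Rancher local-path class before marking "standard" with "true".
```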
	W1217 00:29:59.738428 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:29:59.900890 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:59.977059 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.83980806s)
	I1217 00:29:59.977106 1137611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-219291"
	I1217 00:29:59.980099 1137611 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 00:29:59.983794 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 00:29:59.994154 1137611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:29:59.994180 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
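	The kapi.go:96 lines that follow are a poll loop: list the pods behind a label selector and keep waiting while any of them is still Pending. A minimal sketch of that readiness check with client-go (the selectors and namespaces come from the log; the function itself is an assumption, not kapi.go):

```go
// pod_wait_sketch.go — hypothetical label-selector wait, not minikube's kapi.go.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsRunning reports whether every pod matching the selector is Running.
// Callers poll it, e.g. for "kubernetes.io/minikube-addons=csi-hostpath-driver"
// in "kube-system" or "app.kubernetes.io/name=ingress-nginx" in "ingress-nginx".
func allPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet, keep waiting
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}
```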
	I1217 00:30:00.195267 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:00.195541 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:00.493131 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:00.665141 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:00.669464 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:00.988610 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:01.165898 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:01.166256 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:01.166779 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 00:30:01.166887 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:30:01.188245 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:30:01.304297 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 00:30:01.324972 1137611 addons.go:239] Setting addon gcp-auth=true in "addons-219291"
	I1217 00:30:01.325141 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:30:01.325667 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:30:01.347828 1137611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 00:30:01.347893 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:30:01.367138 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
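	Because the node is a Docker container, the gcp-auth step above first asks Docker which host port is mapped to the container's 22/tcp before opening the SSH session used to copy the credentials (port 33893 in this run). A small sketch of that lookup, shelling out with the same inspect template the cli_runner line shows; error handling is trimmed and this is not minikube's cli_runner:

```go
// ssh_port_sketch.go — illustrative docker-inspect lookup, not minikube's cli_runner.
package sketch

import (
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to 22/tcp inside the
// minikube node container, e.g. "33893" for addons-219291 in this log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command(
		"docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container,
	).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
```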
	I1217 00:30:01.487718 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:01.660690 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:01.661451 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:01.987489 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:02.162461 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:02.162549 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 00:30:02.230245 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:02.487393 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:02.662050 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:02.662088 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:02.987278 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:03.156878 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.25593985s)
	I1217 00:30:03.156942 1137611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.809087386s)
	I1217 00:30:03.160483 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:30:03.163138 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:03.163871 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:03.166160 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 00:30:03.168958 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 00:30:03.168989 1137611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 00:30:03.182430 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 00:30:03.182493 1137611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 00:30:03.196669 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:30:03.196694 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 00:30:03.210115 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:30:03.487185 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:03.665961 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:03.667014 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:03.710375 1137611 addons.go:495] Verifying addon gcp-auth=true in "addons-219291"
	I1217 00:30:03.713524 1137611 out.go:179] * Verifying gcp-auth addon...
	I1217 00:30:03.717087 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 00:30:03.721386 1137611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 00:30:03.721451 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:03.987554 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:04.161910 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:04.162054 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:04.220978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:04.230756 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:04.486994 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:04.661242 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:04.662487 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:04.720167 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:04.987701 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:05.162346 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:05.163449 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:05.220507 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:05.486799 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:05.661051 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:05.661827 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:05.721801 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:05.987185 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:06.163079 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:06.163361 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:06.220126 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:06.231100 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:06.487060 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:06.660800 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:06.661869 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:06.720825 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:06.987739 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:07.161510 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:07.162070 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:07.220056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:07.487353 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:07.661814 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:07.661924 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:07.720961 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:07.988072 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:08.161090 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:08.162377 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:08.221036 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:08.487440 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:08.661927 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:08.662046 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:08.720955 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:08.730609 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:08.986824 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:09.161977 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:09.162142 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:09.220121 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:09.487457 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:09.661756 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:09.661942 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:09.720681 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:09.987468 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:10.161920 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:10.162037 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:10.221991 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:10.487305 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:10.662025 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:10.662262 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:10.719918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:10.731812 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:10.987074 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:11.162190 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:11.162859 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:11.220410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:11.487320 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:11.661948 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:11.662147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:11.720795 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:11.987589 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:12.161834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:12.162004 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:12.220938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:12.487062 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:12.661069 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:12.661615 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:12.720587 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:12.987250 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:13.161393 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:13.162092 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:13.221195 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:13.230912 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:13.487271 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:13.661994 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:13.662147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:13.720080 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:13.987632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:14.160873 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:14.162468 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:14.220410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:14.487566 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:14.662059 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:14.662308 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:14.720003 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:14.987334 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:15.163232 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:15.163353 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:15.220388 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:15.487452 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:15.661601 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:15.661754 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:15.720573 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:15.730212 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:15.987653 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:16.162236 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:16.162351 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:16.221221 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:16.487616 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:16.661865 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:16.662509 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:16.720173 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:16.987645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:17.160745 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:17.162353 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:17.220286 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:17.487327 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:17.661285 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:17.662001 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:17.720899 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:17.730635 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:17.988129 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:18.162261 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:18.162973 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:18.221558 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:18.487716 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:18.661678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:18.661935 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:18.721006 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:18.986931 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:19.161189 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:19.162115 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:19.219900 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:19.486832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:19.661972 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:19.662102 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:19.719985 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:19.987818 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:20.160518 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:20.162233 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:20.220511 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:20.230185 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:20.486808 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:20.663316 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:20.665129 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:20.719983 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:20.986688 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:21.162834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:21.163247 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:21.219848 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:21.487673 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:21.661661 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:21.661859 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:21.720264 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:21.987213 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:22.161163 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:22.162216 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:22.220114 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:22.231081 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:22.487265 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:22.661756 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:22.661892 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:22.720628 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:22.986649 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:23.160595 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:23.161889 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:23.220724 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:23.487759 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:23.662751 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:23.662949 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:23.720696 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:23.988059 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:24.161178 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:24.162413 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:24.220231 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:24.231231 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:24.487442 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:24.662027 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:24.662234 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:24.721299 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:24.987809 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:25.171860 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:25.172550 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:25.223617 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:25.487085 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:25.661060 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:25.662405 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:25.719928 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:25.987327 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:26.162753 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:26.162942 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:26.220659 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:26.487190 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:26.662295 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:26.662804 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:26.720633 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:26.730665 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:26.986978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:27.161414 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:27.162298 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:27.220963 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:27.487799 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:27.660868 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:27.661313 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:27.720147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:27.986746 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:28.162022 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:28.162622 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:28.220329 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:28.487115 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:28.661152 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:28.663013 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:28.720869 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:28.991632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:29.162247 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:29.162545 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:29.220195 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:29.231646 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:29.486547 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:29.662028 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:29.662717 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:29.720274 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:29.987560 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:30.161940 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:30.162145 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:30.221500 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:30.486661 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:30.660699 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:30.661936 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:30.720697 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:30.987372 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:31.161985 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:31.162336 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:31.220966 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:31.486999 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:31.661873 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:31.662235 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:31.720119 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:31.731670 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:31.986953 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:32.161150 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:32.162130 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:32.220925 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:32.486929 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:32.661168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:32.661895 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:32.720647 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:32.986904 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:33.161333 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:33.162214 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:33.220075 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:33.487062 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:33.661813 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:33.661885 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:33.721004 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:33.987054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:34.160856 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:34.163176 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:34.220812 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:34.230453 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:34.487478 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:34.661987 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:34.662365 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:34.720092 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.010678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:35.165318 1137611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:30:35.165345 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:35.168994 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:35.234348 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.245841 1137611 node_ready.go:49] node "addons-219291" is "Ready"
	I1217 00:30:35.245877 1137611 node_ready.go:38] duration metric: took 39.51860602s for node "addons-219291" to be "Ready" ...
	I1217 00:30:35.245893 1137611 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:30:35.245953 1137611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:30:35.276291 1137611 api_server.go:72] duration metric: took 42.07423472s to wait for apiserver process to appear ...
	I1217 00:30:35.276317 1137611 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:30:35.276336 1137611 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 00:30:35.288943 1137611 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 00:30:35.290226 1137611 api_server.go:141] control plane version: v1.34.2
	I1217 00:30:35.290263 1137611 api_server.go:131] duration metric: took 13.939082ms to wait for apiserver health ...
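The two checks above are straightforward to reproduce by hand: the harness first confirms a kube-apiserver process exists (sudo pgrep -xnf kube-apiserver.*minikube.*) and then issues an HTTPS GET against the /healthz endpoint, expecting a 200 status with the body "ok". Below is a minimal Go sketch of that healthz probe, assuming the endpoint shown in the log (https://192.168.49.2:8443) and skipping TLS verification because the cluster CA is not loaded here; it is an illustration, not minikube's own api_server.go code.

    // Minimal sketch of the healthz probe described in the log above (not
    // minikube's implementation). The endpoint URL is taken from the log;
    // TLS verification is skipped because the cluster CA is not loaded here.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The log above reports "returned 200:" followed by the body "ok".
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }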
	I1217 00:30:35.290274 1137611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:30:35.308577 1137611 system_pods.go:59] 19 kube-system pods found
	I1217 00:30:35.308614 1137611 system_pods.go:61] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.308622 1137611 system_pods.go:61] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.308637 1137611 system_pods.go:61] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.308641 1137611 system_pods.go:61] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending
	I1217 00:30:35.308645 1137611 system_pods.go:61] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.308655 1137611 system_pods.go:61] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.308659 1137611 system_pods.go:61] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.308669 1137611 system_pods.go:61] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.308674 1137611 system_pods.go:61] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending
	I1217 00:30:35.308677 1137611 system_pods.go:61] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.308681 1137611 system_pods.go:61] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.308689 1137611 system_pods.go:61] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.308705 1137611 system_pods.go:61] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending
	I1217 00:30:35.308718 1137611 system_pods.go:61] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.308724 1137611 system_pods.go:61] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.308733 1137611 system_pods.go:61] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.308739 1137611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.308746 1137611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending
	I1217 00:30:35.308752 1137611 system_pods.go:61] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.308762 1137611 system_pods.go:74] duration metric: took 18.481891ms to wait for pod list to return data ...
	I1217 00:30:35.308771 1137611 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:30:35.335760 1137611 default_sa.go:45] found service account: "default"
	I1217 00:30:35.335790 1137611 default_sa.go:55] duration metric: took 27.001163ms for default service account to be created ...
	I1217 00:30:35.335801 1137611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:30:35.462372 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:35.462419 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.462426 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.462432 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.462437 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending
	I1217 00:30:35.462440 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.462445 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.462450 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.462463 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.462472 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending
	I1217 00:30:35.462476 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.462482 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.462494 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.462499 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending
	I1217 00:30:35.462512 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.462518 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.462529 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.462544 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.462551 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending
	I1217 00:30:35.462557 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.462575 1137611 retry.go:31] will retry after 233.455932ms: missing components: kube-dns
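The "will retry ... missing components: kube-dns" message above reflects a plain poll loop: list the kube-system pods, check that the required components (here kube-dns, i.e. the coredns pods) are Running, and sleep briefly before trying again. A rough client-go sketch of that pattern follows, assuming the default kubeconfig location; it illustrates the loop shape only and is not the harness's system_pods.go code.

    // Sketch of the poll-and-retry loop visible in the log above: list
    // kube-system pods and retry while kube-dns (coredns) is not yet Running.
    // Assumption: credentials come from the default kubeconfig location.
    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            dnsRunning := false
            for _, p := range pods.Items {
                if strings.HasPrefix(p.Name, "coredns-") && p.Status.Phase == corev1.PodRunning {
                    dnsRunning = true
                }
            }
            if dnsRunning {
                fmt.Println("kube-dns is running")
                return
            }
            fmt.Println("missing components: kube-dns; retrying")
            time.Sleep(300 * time.Millisecond)
        }
    }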
	I1217 00:30:35.508255 1137611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:30:35.508282 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:35.668256 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:35.668356 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:35.705307 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:35.705354 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.705362 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.705368 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.705375 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:35.705381 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.705388 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.705392 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.705397 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.705410 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:35.705426 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.705436 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.705442 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.705451 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:35.705461 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.705467 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.705473 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.705481 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.705490 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.705507 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.705524 1137611 retry.go:31] will retry after 378.643204ms: missing components: kube-dns
	I1217 00:30:35.732108 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.989479 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:36.099704 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:36.099743 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:36.099760 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:30:36.099772 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:30:36.099782 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:36.099787 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:36.099801 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:36.099805 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:36.099810 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:36.099822 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:36.099826 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:36.099838 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:36.099848 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:36.099857 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:36.099865 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:36.099872 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:36.099878 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:36.099886 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.099898 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.099904 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:36.099933 1137611 retry.go:31] will retry after 342.296446ms: missing components: kube-dns
	I1217 00:30:36.186599 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:36.186995 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:36.237314 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:36.448136 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:36.448172 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Running
	I1217 00:30:36.448184 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:30:36.448193 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:30:36.448200 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:36.448205 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:36.448214 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:36.448219 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:36.448230 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:36.448238 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:36.448262 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:36.448267 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:36.448275 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:36.448287 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:36.448295 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:36.448301 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:36.448306 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:36.448325 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.448337 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.448344 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Running
	I1217 00:30:36.448352 1137611 system_pods.go:126] duration metric: took 1.112545517s to wait for k8s-apps to be running ...
	I1217 00:30:36.448360 1137611 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:30:36.448455 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:30:36.463092 1137611 system_svc.go:56] duration metric: took 14.72198ms WaitForService to wait for kubelet
	I1217 00:30:36.463131 1137611 kubeadm.go:587] duration metric: took 43.261094432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:30:36.463148 1137611 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:30:36.466629 1137611 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 00:30:36.466662 1137611 node_conditions.go:123] node cpu capacity is 2
	I1217 00:30:36.466682 1137611 node_conditions.go:105] duration metric: took 3.527809ms to run NodePressure ...
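The capacity figures used in the NodePressure check above (203034800Ki of ephemeral storage, 2 CPUs) are read from the node's status. The following short client-go sketch fetches those fields, assuming the node name from the log and the default kubeconfig; it is illustrative and not the harness's node_conditions.go logic.

    // Sketch of reading the node capacity fields reported above
    // (ephemeral-storage and cpu) from the node status via client-go.
    // Assumptions: node name taken from the log, default kubeconfig location.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-219291", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        // The log reports 203034800Ki ephemeral storage and 2 CPUs for this node.
        fmt.Printf("ephemeral-storage: %s, cpu: %s\n", storage.String(), cpu.String())
    }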
	I1217 00:30:36.466711 1137611 start.go:242] waiting for startup goroutines ...
	I1217 00:30:36.488349 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:36.662311 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:36.662939 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:36.721137 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:36.988457 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:37.162101 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:37.163412 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:37.220244 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:37.487731 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:37.662601 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:37.662839 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:37.721054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:37.988556 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:38.164298 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:38.164840 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:38.263238 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:38.487807 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:38.670496 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:38.670788 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:38.724862 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:38.987966 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:39.165219 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:39.165388 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:39.220651 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:39.492399 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:39.660676 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:39.663042 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:39.719953 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:39.988066 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:40.164072 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:40.165226 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:40.220184 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:40.488052 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:40.662158 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:40.662542 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:40.720204 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:40.987398 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:41.161947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:41.162166 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:41.219841 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:41.487056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:41.663908 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:41.665586 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:41.720875 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:41.987987 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:42.164567 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:42.164951 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:42.221373 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:42.487832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:42.661445 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:42.662894 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:42.720730 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:42.991861 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:43.160959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:43.163376 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:43.220036 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:43.487290 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:43.663503 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:43.663833 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:43.721125 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:43.987316 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:44.162418 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:44.164369 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:44.220945 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:44.492892 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:44.661431 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:44.662709 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:44.720657 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:44.986982 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:45.190019 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:45.191046 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:45.233938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:45.489320 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:45.662717 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:45.663002 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:45.721052 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:45.987289 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:46.161937 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:46.162129 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:46.221130 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:46.488203 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:46.661638 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:46.662253 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:46.720229 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:46.987702 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:47.162064 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:47.162174 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:47.220396 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:47.488926 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:47.663577 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:47.663958 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:47.721089 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:47.987912 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:48.162289 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:48.162404 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:48.220815 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:48.487814 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:48.662590 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:48.662757 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:48.721325 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:48.988160 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:49.164972 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:49.165495 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:49.220905 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:49.487882 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:49.663978 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:49.664630 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:49.720852 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:49.987535 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:50.164267 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:50.164665 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:50.221200 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:50.487634 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:50.663759 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:50.664255 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:50.719964 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:50.988536 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:51.162864 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:51.163424 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:51.220696 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:51.487170 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:51.663898 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:51.664080 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:51.719918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:51.987484 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:52.162759 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:52.163306 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:52.220368 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:52.488168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:52.660875 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:52.662894 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:52.721040 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:52.987840 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:53.164304 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:53.164785 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:53.221047 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:53.493030 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:53.663204 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:53.663366 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:53.720051 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:53.987572 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:54.164082 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:54.164570 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:54.236253 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:54.488056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:54.662082 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:54.663596 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:54.720445 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:54.987769 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:55.161668 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:55.164787 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:55.220961 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:55.488169 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:55.662349 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:55.664477 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:55.720412 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:55.991901 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:56.163663 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:56.165083 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:56.221385 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:56.489083 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:56.663610 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:56.663989 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:56.722950 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:56.989478 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:57.163678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:57.163891 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:57.220634 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:57.488676 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:57.663590 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:57.673119 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:57.720533 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:57.988259 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:58.164203 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:58.164632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:58.220918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:58.488653 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:58.661005 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:58.663693 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:58.722387 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:58.988819 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:59.164787 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:59.164890 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:59.222749 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:59.492007 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:59.663959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:59.664085 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:59.721938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:59.987295 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:00.161878 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:00.164949 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:00.223410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:00.492239 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:00.663783 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:00.664559 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:00.728411 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:00.987710 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:01.163629 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:01.163774 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:01.221319 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:01.493808 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:01.671514 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:01.673698 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:01.723900 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:01.988053 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:02.164058 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:02.164338 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:02.221088 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:02.487382 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:02.663057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:02.663710 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:02.720884 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:02.987383 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:03.164037 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:03.164271 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:03.220474 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:03.487456 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:03.662629 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:03.662832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:03.720978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:03.987598 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:04.161827 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:04.162000 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:04.220927 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:04.487915 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:04.664400 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:04.664791 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:04.719652 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:04.987902 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:05.162443 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:05.164026 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:05.219975 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:05.487011 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:05.661318 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:05.662830 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:05.721553 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:05.987416 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:06.160747 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:06.162877 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:06.221096 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:06.487916 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:06.664241 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:06.665880 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:06.721419 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:06.988555 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:07.162422 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:07.162591 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:07.221049 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:07.488411 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:07.661722 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:07.662542 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:07.720583 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:07.987761 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:08.162245 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:08.163236 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:08.220907 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:08.488535 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:08.663746 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:08.663947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:08.720978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:08.987469 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:09.162541 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:09.165707 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:09.221276 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:09.487856 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:09.661791 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:09.663387 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:09.722022 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:09.988284 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:10.164184 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:10.165461 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:10.221334 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:10.491437 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:10.662015 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:10.662109 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:10.720198 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:10.987984 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:11.161237 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:11.164802 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:11.220742 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:11.489300 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:11.662077 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:11.662504 1137611 kapi.go:107] duration metric: took 1m12.004720954s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 00:31:11.720952 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:11.987512 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:12.162191 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:12.220111 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:12.487884 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:12.664785 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:12.764230 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:12.987391 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:13.164963 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:13.221054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:13.494317 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:13.663010 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:13.721294 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:13.987973 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:14.162587 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:14.220527 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:14.488168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:14.663286 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:14.720174 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:14.995520 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:15.161884 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:15.220466 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:15.491566 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:15.662305 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:15.720212 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:15.988737 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:16.162509 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:16.220528 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:16.495678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:16.664647 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:16.720834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:16.988659 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:17.163544 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:17.220654 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:17.487734 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:17.662610 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:17.720525 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:17.987291 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:18.162754 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:18.221167 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:18.488528 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:18.663689 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:18.720816 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:18.989947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:19.162120 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:19.220477 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:19.488300 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:19.663046 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:19.720244 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:19.988343 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:20.164393 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:20.221309 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:20.488619 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:20.661890 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:20.721844 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:20.995222 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:21.167173 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:21.221128 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:21.488003 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:21.663217 1137611 kapi.go:107] duration metric: took 1m22.004504854s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 00:31:21.720761 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:21.988089 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:22.222687 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:22.490484 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:22.720543 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:22.988057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:23.220125 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:23.488409 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:23.720114 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:23.987645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:24.220733 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:24.487952 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:24.721079 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:24.989417 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:25.221655 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:25.487543 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:25.722746 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:25.989773 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:26.221412 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:26.489186 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:26.720919 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:26.988298 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:27.221133 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:27.489148 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:27.721439 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:27.990032 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:28.220601 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:28.488694 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:28.720861 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:28.987159 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:29.219957 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:29.487908 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:29.720967 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:29.987633 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:30.221460 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:30.487728 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:30.722057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:30.987737 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:31.221247 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:31.490959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:31.721752 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:31.986828 1137611 kapi.go:107] duration metric: took 1m32.003033713s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 00:31:32.221367 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:32.720645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:33.220035 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:33.721158 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:34.220456 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:34.721116 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:35.220455 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:35.720498 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:36.220995 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:36.720385 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:37.221547 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:37.720311 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:38.222018 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:38.720591 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:39.220245 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:39.720592 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:40.221094 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:40.720865 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:41.220280 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:41.720893 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:42.249469 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:42.723837 1137611 kapi.go:107] duration metric: took 1m39.006753771s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 00:31:42.726825 1137611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-219291 cluster.
	I1217 00:31:42.731327 1137611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 00:31:42.734157 1137611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 00:31:42.737330 1137611 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, registry-creds, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1217 00:31:42.740178 1137611 addons.go:530] duration metric: took 1m49.537746606s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner registry-creds inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1217 00:31:42.740222 1137611 start.go:247] waiting for cluster config update ...
	I1217 00:31:42.740245 1137611 start.go:256] writing updated cluster config ...
	I1217 00:31:42.740576 1137611 ssh_runner.go:195] Run: rm -f paused
	I1217 00:31:42.745168 1137611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:31:42.822108 1137611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2l8cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.827179 1137611 pod_ready.go:94] pod "coredns-66bc5c9577-2l8cm" is "Ready"
	I1217 00:31:42.827207 1137611 pod_ready.go:86] duration metric: took 5.067736ms for pod "coredns-66bc5c9577-2l8cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.829418 1137611 pod_ready.go:83] waiting for pod "etcd-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.833653 1137611 pod_ready.go:94] pod "etcd-addons-219291" is "Ready"
	I1217 00:31:42.833678 1137611 pod_ready.go:86] duration metric: took 4.235444ms for pod "etcd-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.835880 1137611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.840725 1137611 pod_ready.go:94] pod "kube-apiserver-addons-219291" is "Ready"
	I1217 00:31:42.840753 1137611 pod_ready.go:86] duration metric: took 4.848287ms for pod "kube-apiserver-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.843265 1137611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.149178 1137611 pod_ready.go:94] pod "kube-controller-manager-addons-219291" is "Ready"
	I1217 00:31:43.149210 1137611 pod_ready.go:86] duration metric: took 305.915896ms for pod "kube-controller-manager-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.349803 1137611 pod_ready.go:83] waiting for pod "kube-proxy-2c69d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.750569 1137611 pod_ready.go:94] pod "kube-proxy-2c69d" is "Ready"
	I1217 00:31:43.750597 1137611 pod_ready.go:86] duration metric: took 400.767024ms for pod "kube-proxy-2c69d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.957001 1137611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:44.349688 1137611 pod_ready.go:94] pod "kube-scheduler-addons-219291" is "Ready"
	I1217 00:31:44.349718 1137611 pod_ready.go:86] duration metric: took 392.691672ms for pod "kube-scheduler-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:44.349733 1137611 pod_ready.go:40] duration metric: took 1.604533685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:31:44.406250 1137611 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 00:31:44.410035 1137611 out.go:179] * Done! kubectl is now configured to use "addons-219291" cluster and "default" namespace by default
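	The gcp-auth messages near the end of the log above describe how to opt a pod out of credential mounting. A minimal sketch of what that looks like in a pod manifest, for illustration only (the pod name is hypothetical and not from this test run; the image is the echo-server image pulled later in this report; "true" is the conventional label value, the webhook keys on the `gcp-auth-skip-secret` label):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth          # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"     # label the gcp-auth webhook checks before mounting credentials
	    spec:
	      containers:
	      - name: app
	        image: docker.io/kicbase/echo-server:1.0

	As the log also suggests, pods created before the addon finished can pick up credentials by being recreated or by rerunning the addon, e.g. `minikube addons enable gcp-auth --refresh`.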
	
	
	==> CRI-O <==
	Dec 17 00:33:48 addons-219291 crio[829]: time="2025-12-17T00:33:48.066846672Z" level=info msg="Removed pod sandbox: a1ae4a3778f05fb06bbc6bb0f48f0c1e67c7c1e69ffa0a65098be46bcaae7ab0" id=9268634b-5d8f-43bf-8f50-f7f1bfed7b3b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.041411942Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-vcgm6/POD" id=78633513-ae3c-4629-8b49-be7b500fc685 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.04150904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.090251323Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-vcgm6 Namespace:default ID:0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e UID:68d62fb9-f074-4c57-808c-5970977428cc NetNS:/var/run/netns/e689ca07-54b3-4cf6-accd-e037a1ca5872 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400216a048}] Aliases:map[]}"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.090449769Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-vcgm6 to CNI network \"kindnet\" (type=ptp)"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.114898209Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-vcgm6 Namespace:default ID:0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e UID:68d62fb9-f074-4c57-808c-5970977428cc NetNS:/var/run/netns/e689ca07-54b3-4cf6-accd-e037a1ca5872 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400216a048}] Aliases:map[]}"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.115071482Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-vcgm6 for CNI network kindnet (type=ptp)"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.126385912Z" level=info msg="Ran pod sandbox 0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e with infra container: default/hello-world-app-5d498dc89-vcgm6/POD" id=78633513-ae3c-4629-8b49-be7b500fc685 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.127966505Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c16c17c0-c9d2-422b-aafc-52a5c0bb01d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.128284646Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c16c17c0-c9d2-422b-aafc-52a5c0bb01d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.128411577Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=c16c17c0-c9d2-422b-aafc-52a5c0bb01d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.132168159Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1b1ce8c4-244c-470a-9493-8767c59f5312 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.143596671Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.879853414Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=1b1ce8c4-244c-470a-9493-8767c59f5312 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.886350185Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f5d2e8f6-fa76-4d3c-9905-de7ac98861ef name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.888727475Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9a4e7536-986d-4c4f-ac49-bc180dee16b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.901680425Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-vcgm6/hello-world-app" id=2de57e66-fb6f-4124-a486-8ecd76b780e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.901955038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.915036405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.915251089Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/007cd7ce38144dae33e236477de9a6c7d88d28e5a9b1f7dc071956982557794a/merged/etc/passwd: no such file or directory"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.915272914Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/007cd7ce38144dae33e236477de9a6c7d88d28e5a9b1f7dc071956982557794a/merged/etc/group: no such file or directory"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.91556772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.936889484Z" level=info msg="Created container 3dcb401b621ddecf9cdfef44beb957604112b4a876674dd18b9ffa89bb9d7a12: default/hello-world-app-5d498dc89-vcgm6/hello-world-app" id=2de57e66-fb6f-4124-a486-8ecd76b780e4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.938573624Z" level=info msg="Starting container: 3dcb401b621ddecf9cdfef44beb957604112b4a876674dd18b9ffa89bb9d7a12" id=4eac38b8-26d3-459b-8269-23b52d0c82ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:34:42 addons-219291 crio[829]: time="2025-12-17T00:34:42.941160699Z" level=info msg="Started container" PID=6905 containerID=3dcb401b621ddecf9cdfef44beb957604112b4a876674dd18b9ffa89bb9d7a12 description=default/hello-world-app-5d498dc89-vcgm6/hello-world-app id=4eac38b8-26d3-459b-8269-23b52d0c82ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	3dcb401b621dd       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   0853e8aa4ec86       hello-world-app-5d498dc89-vcgm6             default
	ccd0b6f9485a6       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   974bffaee3390       nginx                                       default
	6deb013053c81       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   ab53f463a1113       busybox                                     default
	bf4c29dbf6234       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ee1386eb9013f       gcp-auth-78565c9fb4-8zdw7                   gcp-auth
	193890a73e001       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	1bbd4a6a667e4       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	4c9431192a983       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	fa0b6e31d74dd       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	8d6e374670dcd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	cad1a09616cb3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   9db5e8859117a       gadget-ml2x5                                gadget
	c8aaf29c36d11       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   a46bbee15e4c1       ingress-nginx-controller-85d4c799dd-rf2z8   ingress-nginx
	29ad784e8ed80       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   69a790bf7d75f       csi-hostpath-resizer-0                      kube-system
	a051b23901572       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	d8e39af946260       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   c88d024cce8b9       snapshot-controller-7d9fbc56b8-dbmhn        kube-system
	6a9e26980319f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   eb7f527a31979       registry-proxy-f4nhl                        kube-system
	b3b584a64d334       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   8197affe2e0ea       snapshot-controller-7d9fbc56b8-gwhl5        kube-system
	c0e9ccefa063f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   445dc29978230       csi-hostpath-attacher-0                     kube-system
	570b13b93f9ef       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   8c0c0c23548c3       yakd-dashboard-5ff678cb9-lmk68              yakd-dashboard
	137e2ee1d0566       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   70b723ad34385       local-path-provisioner-648f6765c9-49qhs     local-path-storage
	9733ba6e686c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   89f504f071526       kube-ingress-dns-minikube                   kube-system
	157b30139cf36       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              patch                                    0                   b1a122fa78517       ingress-nginx-admission-patch-fwl2h         ingress-nginx
	fa9db7f2a2a56       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   cb59264e22ea1       cloud-spanner-emulator-5bdddb765-qvdx4      default
	6fbc1aa1c1165       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   d757747ad8d60       registry-6b586f9694-zh49c                   kube-system
	6be3d66db02da       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   30eaadd845edc       nvidia-device-plugin-daemonset-86n5b        kube-system
	f9b83d4bb59ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   4 minutes ago            Exited              create                                   0                   fdd85f8102eeb       ingress-nginx-admission-create-qw5z7        ingress-nginx
	7e472f122d8fb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   d8796618f8955       metrics-server-85b7d694d7-h9vmz             kube-system
	ee62b48d5f8a8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   621b8fe0d93b6       storage-provisioner                         kube-system
	fa923421199e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   2c773b690730d       coredns-66bc5c9577-2l8cm                    kube-system
	d80c862e4d310       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   a9b03219e5a17       kindnet-6tjsd                               kube-system
	6111e6b00517f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   82a3e3b9ed862       kube-proxy-2c69d                            kube-system
	a43c51ac35173       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   a1060265c367f       kube-apiserver-addons-219291                kube-system
	641fd3059b8b5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   45d40c0fc6561       kube-scheduler-addons-219291                kube-system
	d981f5abaaa97       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   5cf74d0f7ba7c       kube-controller-manager-addons-219291       kube-system
	d607d9f1296a5       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   ccbd3741ff720       etcd-addons-219291                          kube-system
	
	
	==> coredns [fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4] <==
	[INFO] 10.244.0.12:59793 - 1673 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001831997s
	[INFO] 10.244.0.12:59793 - 14456 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010627s
	[INFO] 10.244.0.12:59793 - 7620 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000098361s
	[INFO] 10.244.0.12:41402 - 30993 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137859s
	[INFO] 10.244.0.12:41402 - 31470 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084995s
	[INFO] 10.244.0.12:46705 - 14255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085963s
	[INFO] 10.244.0.12:46705 - 14067 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081836s
	[INFO] 10.244.0.12:53037 - 58130 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109708s
	[INFO] 10.244.0.12:53037 - 58600 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00032206s
	[INFO] 10.244.0.12:46852 - 2939 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001405029s
	[INFO] 10.244.0.12:46852 - 2759 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001778633s
	[INFO] 10.244.0.12:42786 - 40771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130622s
	[INFO] 10.244.0.12:42786 - 40607 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175306s
	[INFO] 10.244.0.21:39163 - 56174 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018439s
	[INFO] 10.244.0.21:38038 - 16555 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000073089s
	[INFO] 10.244.0.21:49406 - 57259 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011907s
	[INFO] 10.244.0.21:35381 - 49278 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000070899s
	[INFO] 10.244.0.21:47165 - 43381 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007624s
	[INFO] 10.244.0.21:38303 - 6924 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064056s
	[INFO] 10.244.0.21:42406 - 47839 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002264668s
	[INFO] 10.244.0.21:45961 - 51427 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002581075s
	[INFO] 10.244.0.21:44627 - 25271 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002393937s
	[INFO] 10.244.0.21:48002 - 57596 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002023869s
	[INFO] 10.244.0.23:45912 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221295s
	[INFO] 10.244.0.23:44639 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000190535s
	
	
	==> describe nodes <==
	Name:               addons-219291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-219291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=addons-219291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_29_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-219291
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-219291"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-219291
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:34:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:33:12 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:33:12 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:33:12 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:33:12 +0000   Wed, 17 Dec 2025 00:30:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-219291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                0d8ad118-bc04-4408-bc07-b66d8fba29fb
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     cloud-spanner-emulator-5bdddb765-qvdx4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  default                     hello-world-app-5d498dc89-vcgm6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-ml2x5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  gcp-auth                    gcp-auth-78565c9fb4-8zdw7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-rf2z8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m44s
	  kube-system                 coredns-66bc5c9577-2l8cm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m50s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 csi-hostpathplugin-btcsg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-addons-219291                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m56s
	  kube-system                 kindnet-6tjsd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-addons-219291                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-addons-219291        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-2c69d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-addons-219291                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 metrics-server-85b7d694d7-h9vmz              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m45s
	  kube-system                 nvidia-device-plugin-daemonset-86n5b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-6b586f9694-zh49c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 registry-creds-764b6fb674-h6f8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 registry-proxy-f4nhl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 snapshot-controller-7d9fbc56b8-dbmhn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-gwhl5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  local-path-storage          local-path-provisioner-648f6765c9-49qhs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lmk68               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m49s                kube-proxy       
	  Warning  CgroupV1                 5m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node addons-219291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node addons-219291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s (x8 over 5m3s)  kubelet          Node addons-219291 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m55s                kubelet          Node addons-219291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s                kubelet          Node addons-219291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s                kubelet          Node addons-219291 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m51s                node-controller  Node addons-219291 event: Registered Node addons-219291 in Controller
	  Normal   NodeReady                4m9s                 kubelet          Node addons-219291 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 23:34] overlayfs: idmapped layers are currently not supported
	[Dec16 23:35] overlayfs: idmapped layers are currently not supported
	[Dec16 23:37] overlayfs: idmapped layers are currently not supported
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e] <==
	{"level":"warn","ts":"2025-12-17T00:29:43.954327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:43.973803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:43.979309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.003334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.041156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.067786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.106533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.135885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.164615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.194037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.209289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.273200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.295070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.319677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.341282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.369402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.385290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.404760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.496473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:00.744986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:00.769522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.278671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.300764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.328909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.345281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [bf4c29dbf62345eb6caaea8b0d74345f55043a98dd8d809c1936c697d1d5f891] <==
	2025/12/17 00:31:42 GCP Auth Webhook started!
	2025/12/17 00:31:44 Ready to marshal response ...
	2025/12/17 00:31:44 Ready to write response ...
	2025/12/17 00:31:45 Ready to marshal response ...
	2025/12/17 00:31:45 Ready to write response ...
	2025/12/17 00:31:45 Ready to marshal response ...
	2025/12/17 00:31:45 Ready to write response ...
	2025/12/17 00:32:05 Ready to marshal response ...
	2025/12/17 00:32:05 Ready to write response ...
	2025/12/17 00:32:09 Ready to marshal response ...
	2025/12/17 00:32:09 Ready to write response ...
	2025/12/17 00:32:09 Ready to marshal response ...
	2025/12/17 00:32:09 Ready to write response ...
	2025/12/17 00:32:18 Ready to marshal response ...
	2025/12/17 00:32:18 Ready to write response ...
	2025/12/17 00:32:21 Ready to marshal response ...
	2025/12/17 00:32:21 Ready to write response ...
	2025/12/17 00:32:37 Ready to marshal response ...
	2025/12/17 00:32:37 Ready to write response ...
	2025/12/17 00:32:50 Ready to marshal response ...
	2025/12/17 00:32:50 Ready to write response ...
	2025/12/17 00:34:41 Ready to marshal response ...
	2025/12/17 00:34:41 Ready to write response ...
	
	
	==> kernel <==
	 00:34:44 up  6:17,  0 user,  load average: 0.26, 1.26, 1.55
	Linux addons-219291 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3] <==
	I1217 00:32:34.435912       1 main.go:301] handling current node
	I1217 00:32:44.435831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:32:44.435941       1 main.go:301] handling current node
	I1217 00:32:54.435332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:32:54.435366       1 main.go:301] handling current node
	I1217 00:33:04.436717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:04.436754       1 main.go:301] handling current node
	I1217 00:33:14.435639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:14.435674       1 main.go:301] handling current node
	I1217 00:33:24.444731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:24.444846       1 main.go:301] handling current node
	I1217 00:33:34.436192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:34.436227       1 main.go:301] handling current node
	I1217 00:33:44.444026       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:44.444060       1 main.go:301] handling current node
	I1217 00:33:54.444138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:33:54.444246       1 main.go:301] handling current node
	I1217 00:34:04.441106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:34:04.441142       1 main.go:301] handling current node
	I1217 00:34:14.444011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:34:14.444046       1 main.go:301] handling current node
	I1217 00:34:24.443268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:34:24.443322       1 main.go:301] handling current node
	I1217 00:34:34.440982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:34:34.441018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4] <==
	W1217 00:30:22.324838       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:22.339823       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:35.034309       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.036102       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:35.037160       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.037769       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:35.115075       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.115721       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	E1217 00:30:39.852377       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:39.852501       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:30:39.852556       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 00:30:39.852981       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	E1217 00:30:39.859552       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	I1217 00:30:39.980233       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 00:31:54.517303       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45186: use of closed network connection
	E1217 00:31:54.765004       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45218: use of closed network connection
	E1217 00:31:54.896659       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45248: use of closed network connection
	I1217 00:32:20.995761       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 00:32:21.281647       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.224.80"}
	I1217 00:32:43.226065       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1217 00:32:45.000765       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1217 00:34:41.926472       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.247.57"}
	
	
	==> kube-controller-manager [d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60] <==
	I1217 00:29:52.284931       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 00:29:52.285101       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:29:52.285254       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:29:52.286949       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:29:52.286998       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 00:29:52.289384       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:29:52.292064       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:29:52.292456       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:29:52.303775       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:29:52.312110       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:29:52.329489       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 00:29:52.332985       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:29:52.333007       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 00:29:52.333014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 00:29:52.338339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E1217 00:29:58.569614       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 00:29:58.588675       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 00:30:22.270355       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 00:30:22.270627       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 00:30:22.270706       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 00:30:22.304295       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 00:30:22.309115       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 00:30:22.371884       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:30:22.409666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:30:37.295774       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2] <==
	I1217 00:29:54.271167       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:29:54.366518       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:29:54.496156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:29:54.496196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 00:29:54.496276       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:29:54.571953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:29:54.572020       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:29:54.578157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:29:54.578484       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:29:54.578500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:29:54.579895       1 config.go:200] "Starting service config controller"
	I1217 00:29:54.579907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:29:54.579922       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:29:54.579927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:29:54.579936       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:29:54.579940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:29:54.580869       1 config.go:309] "Starting node config controller"
	I1217 00:29:54.580878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:29:54.580886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:29:54.680034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:29:54.680069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:29:54.680094       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5] <==
	I1217 00:29:45.354625       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 00:29:45.376202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 00:29:45.394762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:29:45.394929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:29:45.395368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:29:45.395534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:29:45.395694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:29:45.395798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:29:45.395967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:29:45.396149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:29:45.396241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:29:45.396328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:29:45.396447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:29:45.396549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:29:45.396662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:29:45.396747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:29:45.396886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:29:45.396932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:29:45.397007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:29:45.398310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:29:46.261078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:29:46.343039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:29:46.362662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:29:46.457872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1217 00:29:48.956929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:32:52 addons-219291 kubelet[1282]: I1217 00:32:52.049870    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.049852078 podStartE2EDuration="2.049852078s" podCreationTimestamp="2025-12-17 00:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 00:32:52.048636788 +0000 UTC m=+184.307415523" watchObservedRunningTime="2025-12-17 00:32:52.049852078 +0000 UTC m=+184.308630813"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.061375    1282 scope.go:117] "RemoveContainer" containerID="159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.071974    1282 scope.go:117] "RemoveContainer" containerID="159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: E1217 00:32:59.072500    1282 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205\": container with ID starting with 159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205 not found: ID does not exist" containerID="159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.072548    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205"} err="failed to get container status \"159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205\": rpc error: code = NotFound desc = could not find container \"159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205\": container with ID starting with 159a27f94b128186a218c7384723e1fd0382f1acbe02fd87064b026a5650c205 not found: ID does not exist"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.118640    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^eb0e6a25-dadf-11f0-b96b-d296f49c5d20\") pod \"be7615ac-3468-4d57-89af-95b7a4a36d01\" (UID: \"be7615ac-3468-4d57-89af-95b7a4a36d01\") "
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.118700    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/be7615ac-3468-4d57-89af-95b7a4a36d01-gcp-creds\") pod \"be7615ac-3468-4d57-89af-95b7a4a36d01\" (UID: \"be7615ac-3468-4d57-89af-95b7a4a36d01\") "
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.118755    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffrgj\" (UniqueName: \"kubernetes.io/projected/be7615ac-3468-4d57-89af-95b7a4a36d01-kube-api-access-ffrgj\") pod \"be7615ac-3468-4d57-89af-95b7a4a36d01\" (UID: \"be7615ac-3468-4d57-89af-95b7a4a36d01\") "
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.119323    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be7615ac-3468-4d57-89af-95b7a4a36d01-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "be7615ac-3468-4d57-89af-95b7a4a36d01" (UID: "be7615ac-3468-4d57-89af-95b7a4a36d01"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.124726    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7615ac-3468-4d57-89af-95b7a4a36d01-kube-api-access-ffrgj" (OuterVolumeSpecName: "kube-api-access-ffrgj") pod "be7615ac-3468-4d57-89af-95b7a4a36d01" (UID: "be7615ac-3468-4d57-89af-95b7a4a36d01"). InnerVolumeSpecName "kube-api-access-ffrgj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.127451    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^eb0e6a25-dadf-11f0-b96b-d296f49c5d20" (OuterVolumeSpecName: "task-pv-storage") pod "be7615ac-3468-4d57-89af-95b7a4a36d01" (UID: "be7615ac-3468-4d57-89af-95b7a4a36d01"). InnerVolumeSpecName "pvc-5a394468-6809-4fe4-a5bd-96933c11993a". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.219607    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffrgj\" (UniqueName: \"kubernetes.io/projected/be7615ac-3468-4d57-89af-95b7a4a36d01-kube-api-access-ffrgj\") on node \"addons-219291\" DevicePath \"\""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.219691    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-5a394468-6809-4fe4-a5bd-96933c11993a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^eb0e6a25-dadf-11f0-b96b-d296f49c5d20\") on node \"addons-219291\" "
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.219710    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/be7615ac-3468-4d57-89af-95b7a4a36d01-gcp-creds\") on node \"addons-219291\" DevicePath \"\""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.225481    1282 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-5a394468-6809-4fe4-a5bd-96933c11993a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^eb0e6a25-dadf-11f0-b96b-d296f49c5d20") on node "addons-219291"
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.320351    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-5a394468-6809-4fe4-a5bd-96933c11993a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^eb0e6a25-dadf-11f0-b96b-d296f49c5d20\") on node \"addons-219291\" DevicePath \"\""
	Dec 17 00:32:59 addons-219291 kubelet[1282]: I1217 00:32:59.883900    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7615ac-3468-4d57-89af-95b7a4a36d01" path="/var/lib/kubelet/pods/be7615ac-3468-4d57-89af-95b7a4a36d01/volumes"
	Dec 17 00:33:16 addons-219291 kubelet[1282]: I1217 00:33:16.880646    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zh49c" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:33:23 addons-219291 kubelet[1282]: I1217 00:33:23.881788    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-86n5b" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:33:56 addons-219291 kubelet[1282]: I1217 00:33:56.880823    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-f4nhl" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:34:40 addons-219291 kubelet[1282]: I1217 00:34:40.880858    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zh49c" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:34:41 addons-219291 kubelet[1282]: I1217 00:34:41.897274    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77bb4\" (UniqueName: \"kubernetes.io/projected/68d62fb9-f074-4c57-808c-5970977428cc-kube-api-access-77bb4\") pod \"hello-world-app-5d498dc89-vcgm6\" (UID: \"68d62fb9-f074-4c57-808c-5970977428cc\") " pod="default/hello-world-app-5d498dc89-vcgm6"
	Dec 17 00:34:41 addons-219291 kubelet[1282]: I1217 00:34:41.897342    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/68d62fb9-f074-4c57-808c-5970977428cc-gcp-creds\") pod \"hello-world-app-5d498dc89-vcgm6\" (UID: \"68d62fb9-f074-4c57-808c-5970977428cc\") " pod="default/hello-world-app-5d498dc89-vcgm6"
	Dec 17 00:34:42 addons-219291 kubelet[1282]: W1217 00:34:42.124574    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/crio-0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e WatchSource:0}: Error finding container 0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e: Status 404 returned error can't find the container with id 0853e8aa4ec8635ba32c9048dfcbdc80d76f74079ca3aa78932a6c7f571a2c1e
	Dec 17 00:34:43 addons-219291 kubelet[1282]: I1217 00:34:43.449071    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-vcgm6" podStartSLOduration=1.692819992 podStartE2EDuration="2.449052044s" podCreationTimestamp="2025-12-17 00:34:41 +0000 UTC" firstStartedPulling="2025-12-17 00:34:42.131235219 +0000 UTC m=+294.390013945" lastFinishedPulling="2025-12-17 00:34:42.88746727 +0000 UTC m=+295.146245997" observedRunningTime="2025-12-17 00:34:43.448361389 +0000 UTC m=+295.707140140" watchObservedRunningTime="2025-12-17 00:34:43.449052044 +0000 UTC m=+295.707830779"
	
	
	==> storage-provisioner [ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58] <==
	W1217 00:34:19.448626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:21.451287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:21.455697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:23.458647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:23.463111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:25.466039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:25.472895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:27.476320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:27.480750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:29.484038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:29.488940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:31.492561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:31.497305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:33.500399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:33.505323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:35.509081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:35.513754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:37.517518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:37.526072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:39.529963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:39.534431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:41.538278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:41.545452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:43.549689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:43.555876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-219291 -n addons-219291
helpers_test.go:270: (dbg) Run:  kubectl --context addons-219291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z: exit status 1 (276.923562ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qw5z7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fwl2h" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-h6f8z" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (342.99705ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:34:45.393015 1146878 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:45.393867 1146878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:45.393921 1146878 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:45.393943 1146878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:45.394268 1146878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:34:45.394653 1146878 mustload.go:66] Loading cluster: addons-219291
	I1217 00:34:45.395160 1146878 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:45.395210 1146878 addons.go:622] checking whether the cluster is paused
	I1217 00:34:45.395362 1146878 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:45.395395 1146878 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:34:45.396013 1146878 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:34:45.420840 1146878 ssh_runner.go:195] Run: systemctl --version
	I1217 00:34:45.420899 1146878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:34:45.446913 1146878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:34:45.552355 1146878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:34:45.552552 1146878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:34:45.596868 1146878 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:34:45.596897 1146878 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:34:45.596903 1146878 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:34:45.596907 1146878 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:34:45.596911 1146878 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:34:45.596914 1146878 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:34:45.596918 1146878 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:34:45.596921 1146878 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:34:45.596928 1146878 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:34:45.596934 1146878 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:34:45.596937 1146878 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:34:45.596940 1146878 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:34:45.596943 1146878 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:34:45.596946 1146878 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:34:45.596950 1146878 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:34:45.596961 1146878 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:34:45.596974 1146878 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:34:45.596978 1146878 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:34:45.596981 1146878 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:34:45.596984 1146878 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:34:45.596989 1146878 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:34:45.596992 1146878 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:34:45.596996 1146878 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:34:45.596999 1146878 cri.go:89] found id: ""
	I1217 00:34:45.597062 1146878 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:34:45.623337 1146878 out.go:203] 
	W1217 00:34:45.626514 1146878 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:34:45.626545 1146878 out.go:285] * 
	* 
	W1217 00:34:45.635245 1146878 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:34:45.637609 1146878 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable ingress --alsologtostderr -v=1: exit status 11 (351.27144ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:34:45.744345 1146993 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:45.745557 1146993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:45.745576 1146993 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:45.745583 1146993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:45.745945 1146993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:34:45.746327 1146993 mustload.go:66] Loading cluster: addons-219291
	I1217 00:34:45.746770 1146993 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:45.746787 1146993 addons.go:622] checking whether the cluster is paused
	I1217 00:34:45.746903 1146993 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:34:45.746916 1146993 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:34:45.747488 1146993 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:34:45.769435 1146993 ssh_runner.go:195] Run: systemctl --version
	I1217 00:34:45.769495 1146993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:34:45.790353 1146993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:34:45.899772 1146993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:34:45.899873 1146993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:34:45.940993 1146993 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:34:45.941019 1146993 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:34:45.941025 1146993 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:34:45.941029 1146993 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:34:45.941033 1146993 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:34:45.941037 1146993 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:34:45.941040 1146993 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:34:45.941043 1146993 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:34:45.941047 1146993 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:34:45.941057 1146993 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:34:45.941061 1146993 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:34:45.941064 1146993 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:34:45.941067 1146993 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:34:45.941070 1146993 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:34:45.941074 1146993 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:34:45.941083 1146993 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:34:45.941091 1146993 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:34:45.941096 1146993 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:34:45.941100 1146993 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:34:45.941103 1146993 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:34:45.941108 1146993 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:34:45.941111 1146993 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:34:45.941114 1146993 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:34:45.941117 1146993 cri.go:89] found id: ""
	I1217 00:34:45.941174 1146993 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:34:45.976937 1146993 out.go:203] 
	W1217 00:34:45.980103 1146993 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:34:45.980134 1146993 out.go:285] * 
	* 
	W1217 00:34:45.989939 1146993 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:34:45.993025 1146993 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.36s)
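Note: every addon enable/disable failure in this report shows the same underlying error. The "is the cluster paused" check visible in the logs first lists kube-system containers with crictl (which succeeds, see the cri.go "found id:" lines) and then runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory" on this CRI-O node, so the command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED for Headlamp below) before the addon is touched. A minimal reproduction sketch against the profile from this run; the profile name and commands are taken from the log above, and the sketch is illustrative only, not part of the test suite:

  # succeeds: lists kube-system container IDs, matching the "found id:" lines above
  minikube -p addons-219291 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  # fails on this node: the /run/runc state directory does not exist, so the paused
  # check exits with status 1 and the addon command returns exit status 11
  minikube -p addons-219291 ssh -- sudo runc list -f json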

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-ml2x5" [929ae456-a0cf-476a-b9d7-fc127c6c2245] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003160934s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (255.790722ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:33:06.647907 1145856 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:33:06.648759 1145856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:06.648802 1145856 out.go:374] Setting ErrFile to fd 2...
	I1217 00:33:06.648824 1145856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:06.649217 1145856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:33:06.649603 1145856 mustload.go:66] Loading cluster: addons-219291
	I1217 00:33:06.650656 1145856 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:06.650688 1145856 addons.go:622] checking whether the cluster is paused
	I1217 00:33:06.650852 1145856 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:06.650872 1145856 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:33:06.651425 1145856 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:33:06.668763 1145856 ssh_runner.go:195] Run: systemctl --version
	I1217 00:33:06.668822 1145856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:33:06.687657 1145856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:33:06.787433 1145856 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:33:06.787519 1145856 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:33:06.817138 1145856 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:33:06.817158 1145856 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:33:06.817163 1145856 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:33:06.817166 1145856 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:33:06.817169 1145856 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:33:06.817173 1145856 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:33:06.817176 1145856 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:33:06.817179 1145856 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:33:06.817182 1145856 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:33:06.817189 1145856 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:33:06.817192 1145856 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:33:06.817195 1145856 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:33:06.817198 1145856 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:33:06.817201 1145856 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:33:06.817205 1145856 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:33:06.817212 1145856 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:33:06.817215 1145856 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:33:06.817219 1145856 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:33:06.817222 1145856 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:33:06.817225 1145856 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:33:06.817230 1145856 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:33:06.817233 1145856 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:33:06.817236 1145856 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:33:06.817239 1145856 cri.go:89] found id: ""
	I1217 00:33:06.817292 1145856 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:33:06.834169 1145856 out.go:203] 
	W1217 00:33:06.837036 1145856 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:33:06.837069 1145856 out.go:285] * 
	* 
	W1217 00:33:06.845205 1145856 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:33:06.848119 1145856 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.942864ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003578545s
addons_test.go:465: (dbg) Run:  kubectl --context addons-219291 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (261.366392ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:32:20.435300 1144838 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:20.436147 1144838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:20.436162 1144838 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:20.436169 1144838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:20.436509 1144838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:20.436850 1144838 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:20.437273 1144838 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:20.437296 1144838 addons.go:622] checking whether the cluster is paused
	I1217 00:32:20.437445 1144838 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:20.437463 1144838 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:20.438018 1144838 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:20.456940 1144838 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:20.457000 1144838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:20.476660 1144838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:20.571330 1144838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:20.571435 1144838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:20.602538 1144838 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:20.602569 1144838 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:20.602586 1144838 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:20.602591 1144838 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:20.602612 1144838 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:20.602627 1144838 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:20.602631 1144838 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:20.602634 1144838 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:20.602638 1144838 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:20.602643 1144838 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:20.602672 1144838 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:20.602682 1144838 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:20.602685 1144838 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:20.602688 1144838 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:20.602691 1144838 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:20.602696 1144838 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:20.602715 1144838 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:20.602720 1144838 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:20.602723 1144838 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:20.602726 1144838 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:20.602732 1144838 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:20.602735 1144838 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:20.602738 1144838 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:20.602740 1144838 cri.go:89] found id: ""
	I1217 00:32:20.602804 1144838 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:20.618539 1144838 out.go:203] 
	W1217 00:32:20.621619 1144838 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:20.621648 1144838 out.go:285] * 
	* 
	W1217 00:32:20.629786 1144838 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:20.632762 1144838 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/CSI (41.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:32:18.666937 1136597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 00:32:18.671248 1136597 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:32:18.671275 1136597 kapi.go:107] duration metric: took 5.056094ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 5.066998ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-219291 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-219291 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [44a3eb2b-d06c-47a9-bfca-ec6679bbedfb] Pending
helpers_test.go:353: "task-pv-pod" [44a3eb2b-d06c-47a9-bfca-ec6679bbedfb] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003764349s
addons_test.go:574: (dbg) Run:  kubectl --context addons-219291 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-219291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-219291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-219291 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-219291 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-219291 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-219291 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [be7615ac-3468-4d57-89af-95b7a4a36d01] Pending
helpers_test.go:353: "task-pv-pod-restore" [be7615ac-3468-4d57-89af-95b7a4a36d01] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [be7615ac-3468-4d57-89af-95b7a4a36d01] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003541764s
addons_test.go:616: (dbg) Run:  kubectl --context addons-219291 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-219291 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-219291 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (447.457356ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:32:59.894709 1145753 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:59.895589 1145753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:59.895647 1145753 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:59.895677 1145753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:59.896042 1145753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:59.896473 1145753 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:59.896964 1145753 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:59.897014 1145753 addons.go:622] checking whether the cluster is paused
	I1217 00:32:59.897180 1145753 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:59.897213 1145753 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:59.897814 1145753 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:59.918819 1145753 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:59.918872 1145753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:59.937450 1145753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:33:00.071041 1145753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:33:00.071245 1145753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:33:00.245440 1145753 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:33:00.245533 1145753 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:33:00.245558 1145753 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:33:00.245583 1145753 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:33:00.245621 1145753 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:33:00.245648 1145753 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:33:00.245672 1145753 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:33:00.245698 1145753 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:33:00.245739 1145753 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:33:00.245778 1145753 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:33:00.245803 1145753 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:33:00.245831 1145753 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:33:00.245870 1145753 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:33:00.245911 1145753 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:33:00.245953 1145753 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:33:00.246007 1145753 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:33:00.246042 1145753 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:33:00.246075 1145753 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:33:00.246104 1145753 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:33:00.246127 1145753 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:33:00.246148 1145753 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:33:00.246183 1145753 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:33:00.246207 1145753 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:33:00.246226 1145753 cri.go:89] found id: ""
	I1217 00:33:00.246327 1145753 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:33:00.267414 1145753 out.go:203] 
	W1217 00:33:00.270561 1145753 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:33:00.270614 1145753 out.go:285] * 
	* 
	W1217 00:33:00.279959 1145753 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:33:00.283575 1145753 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (301.819151ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:33:00.375765 1145795 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:33:00.376882 1145795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:00.377003 1145795 out.go:374] Setting ErrFile to fd 2...
	I1217 00:33:00.377029 1145795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:33:00.377380 1145795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:33:00.379033 1145795 mustload.go:66] Loading cluster: addons-219291
	I1217 00:33:00.379621 1145795 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:00.379682 1145795 addons.go:622] checking whether the cluster is paused
	I1217 00:33:00.379886 1145795 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:33:00.379924 1145795 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:33:00.382237 1145795 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:33:00.402634 1145795 ssh_runner.go:195] Run: systemctl --version
	I1217 00:33:00.402698 1145795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:33:00.424352 1145795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:33:00.523638 1145795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:33:00.523738 1145795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:33:00.557966 1145795 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:33:00.557989 1145795 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:33:00.558004 1145795 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:33:00.558008 1145795 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:33:00.558011 1145795 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:33:00.558015 1145795 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:33:00.558039 1145795 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:33:00.558048 1145795 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:33:00.558051 1145795 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:33:00.558057 1145795 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:33:00.558061 1145795 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:33:00.558064 1145795 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:33:00.558067 1145795 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:33:00.558071 1145795 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:33:00.558074 1145795 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:33:00.558079 1145795 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:33:00.558086 1145795 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:33:00.558089 1145795 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:33:00.558093 1145795 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:33:00.558096 1145795 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:33:00.558101 1145795 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:33:00.558118 1145795 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:33:00.558121 1145795 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:33:00.558124 1145795 cri.go:89] found id: ""
	I1217 00:33:00.558194 1145795 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:33:00.573532 1145795 out.go:203] 
	W1217 00:33:00.576493 1145795 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:33:00.576534 1145795 out.go:285] * 
	* 
	W1217 00:33:00.584915 1145795 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:33:00.587938 1145795 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.93s)

TestAddons/parallel/Headlamp (3.12s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-219291 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-219291 --alsologtostderr -v=1: exit status 11 (258.56005ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 00:31:55.215636 1143635 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:31:55.216554 1143635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:55.216575 1143635 out.go:374] Setting ErrFile to fd 2...
	I1217 00:31:55.216582 1143635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:55.216863 1143635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:31:55.217160 1143635 mustload.go:66] Loading cluster: addons-219291
	I1217 00:31:55.217548 1143635 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:55.217567 1143635 addons.go:622] checking whether the cluster is paused
	I1217 00:31:55.217672 1143635 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:55.217689 1143635 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:31:55.218214 1143635 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:31:55.237348 1143635 ssh_runner.go:195] Run: systemctl --version
	I1217 00:31:55.237407 1143635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:31:55.255318 1143635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:31:55.351821 1143635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:31:55.351908 1143635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:31:55.385029 1143635 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:31:55.385057 1143635 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:31:55.385062 1143635 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:31:55.385075 1143635 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:31:55.385079 1143635 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:31:55.385083 1143635 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:31:55.385086 1143635 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:31:55.385091 1143635 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:31:55.385095 1143635 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:31:55.385101 1143635 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:31:55.385108 1143635 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:31:55.385111 1143635 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:31:55.385114 1143635 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:31:55.385118 1143635 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:31:55.385121 1143635 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:31:55.385127 1143635 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:31:55.385133 1143635 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:31:55.385137 1143635 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:31:55.385140 1143635 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:31:55.385143 1143635 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:31:55.385148 1143635 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:31:55.385151 1143635 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:31:55.385154 1143635 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:31:55.385157 1143635 cri.go:89] found id: ""
	I1217 00:31:55.385212 1143635 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:31:55.399851 1143635 out.go:203] 
	W1217 00:31:55.402836 1143635 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:31:55.402876 1143635 out.go:285] * 
	* 
	W1217 00:31:55.411942 1143635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:31:55.415116 1143635 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-219291 --alsologtostderr -v=1": exit status 11
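The enable fails inside minikube's paused-state check: it lists kube-system containers via crictl and then runs "sudo runc list -f json" in the node, which exits 1 because /run/runc does not exist there. A small inspection sketch, assuming the addons-219291 node container is still up; the /run/crun path is an assumption (crio may be using crun rather than runc as its OCI runtime), not something this log confirms:

$ docker exec addons-219291 ls -ld /run/runc /run/crun
$ docker exec addons-219291 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
$ docker exec addons-219291 sudo runc list -f json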
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-219291
helpers_test.go:244: (dbg) docker inspect addons-219291:

-- stdout --
	[
	    {
	        "Id": "c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29",
	        "Created": "2025-12-17T00:29:23.60254559Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1138002,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:29:23.670211369Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/hosts",
	        "LogPath": "/var/lib/docker/containers/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29/c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29-json.log",
	        "Name": "/addons-219291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-219291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-219291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4d712690c2bd7c70f3f7e57deb0771d8b251295319eb93781f198a09ab32f29",
	                "LowerDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eea91f5be89d6e22e3bc3fe4ed775c94b20ad9a637a30ed4d0d0c78e26558516/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-219291",
	                "Source": "/var/lib/docker/volumes/addons-219291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-219291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-219291",
	                "name.minikube.sigs.k8s.io": "addons-219291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93b90d16bb3b47acf7c37c78d4acbd32aee1707d21a7cc33a012fe92373ae2a5",
	            "SandboxKey": "/var/run/docker/netns/93b90d16bb3b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33897"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33896"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-219291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:ff:12:2e:03:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c734422b54cad368ea21c7b067862f0afc571c532d44186c4767bc4103a3f9d4",
	                    "EndpointID": "d027415df98d1b82093c3dbf222355a12ec44017dc2a80f7ee0f5362d77df53d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-219291",
	                        "c4d712690c2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
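The inspect output above is the source of the connection details used throughout these logs (SSH on 127.0.0.1:33893, node IP 192.168.49.2). The same values can be pulled directly with the Go templates the tooling itself uses; a sketch, assuming the container is still running:

$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-219291
$ docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-219291
$ docker container inspect -f '{{.State.Status}}' addons-219291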
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-219291 -n addons-219291
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-219291 logs -n 25: (1.410546496s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-852471   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-852471                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-852471   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ -o=json --download-only -p download-only-514568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-514568   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-514568                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-514568   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ -o=json --download-only -p download-only-633590 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-633590   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-633590                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-633590   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-852471                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-852471   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-514568                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-514568   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-633590                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-633590   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ --download-only -p download-docker-970516 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-970516 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ -p download-docker-970516                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-970516 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ --download-only -p binary-mirror-272608 --alsologtostderr --binary-mirror http://127.0.0.1:43199 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-272608   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ -p binary-mirror-272608                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-272608   │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ addons  │ enable dashboard -p addons-219291                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-219291                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ start   │ -p addons-219291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:31 UTC │
	│ addons  │ addons-219291 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ addons-219291 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-219291 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-219291          │ jenkins │ v1.37.0 │ 17 Dec 25 00:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:28:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:28:58.475482 1137611 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:28:58.475666 1137611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:58.475702 1137611 out.go:374] Setting ErrFile to fd 2...
	I1217 00:28:58.475723 1137611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:58.476172 1137611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:28:58.477518 1137611 out.go:368] Setting JSON to false
	I1217 00:28:58.478361 1137611 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22289,"bootTime":1765909050,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:28:58.478434 1137611 start.go:143] virtualization:  
	I1217 00:28:58.481881 1137611 out.go:179] * [addons-219291] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:28:58.485626 1137611 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:28:58.485833 1137611 notify.go:221] Checking for updates...
	I1217 00:28:58.491276 1137611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:28:58.494291 1137611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:28:58.497133 1137611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:28:58.500030 1137611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:28:58.502881 1137611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:28:58.506039 1137611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:28:58.531586 1137611 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:28:58.531733 1137611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:58.598751 1137611 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:58.588769647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:58.598862 1137611 docker.go:319] overlay module found
	I1217 00:28:58.603972 1137611 out.go:179] * Using the docker driver based on user configuration
	I1217 00:28:58.606929 1137611 start.go:309] selected driver: docker
	I1217 00:28:58.606957 1137611 start.go:927] validating driver "docker" against <nil>
	I1217 00:28:58.606971 1137611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:28:58.607713 1137611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:58.668995 1137611 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:58.65893624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:58.669180 1137611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:28:58.669457 1137611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:28:58.672534 1137611 out.go:179] * Using Docker driver with root privileges
	I1217 00:28:58.675336 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:28:58.675405 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:28:58.675421 1137611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:28:58.675518 1137611 start.go:353] cluster config:
	{Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:28:58.678717 1137611 out.go:179] * Starting "addons-219291" primary control-plane node in "addons-219291" cluster
	I1217 00:28:58.681570 1137611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:28:58.684505 1137611 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:28:58.687384 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:28:58.687429 1137611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:58.687441 1137611 cache.go:65] Caching tarball of preloaded images
	I1217 00:28:58.687488 1137611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:28:58.687525 1137611 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:28:58.687536 1137611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:28:58.687911 1137611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json ...
	I1217 00:28:58.687945 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json: {Name:mk097cd69bde9af0d62b4dab5c8cf7c444d62365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:28:58.703390 1137611 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:58.703535 1137611 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:28:58.703570 1137611 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:28:58.703575 1137611 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:28:58.703582 1137611 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:28:58.703587 1137611 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from local cache
	I1217 00:29:16.935006 1137611 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 from cached tarball
	I1217 00:29:16.935049 1137611 cache.go:243] Successfully downloaded all kic artifacts
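# Sketch: a quick check that the cached artifacts referenced above exist,
# assuming the MINIKUBE_HOME paths shown in this log.
ls -lh /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
docker images --digests gcr.io/k8s-minikube/kicbase-builds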
	I1217 00:29:16.935103 1137611 start.go:360] acquireMachinesLock for addons-219291: {Name:mk7d4d51d983f82bba701a3615b816bcece5d275 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:29:16.935240 1137611 start.go:364] duration metric: took 110.094µs to acquireMachinesLock for "addons-219291"
	I1217 00:29:16.935272 1137611 start.go:93] Provisioning new machine with config: &{Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:29:16.935351 1137611 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:29:16.938777 1137611 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 00:29:16.939033 1137611 start.go:159] libmachine.API.Create for "addons-219291" (driver="docker")
	I1217 00:29:16.939074 1137611 client.go:173] LocalClient.Create starting
	I1217 00:29:16.939213 1137611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 00:29:17.226582 1137611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 00:29:17.413916 1137611 cli_runner.go:164] Run: docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:29:17.430131 1137611 cli_runner.go:211] docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:29:17.430234 1137611 network_create.go:284] running [docker network inspect addons-219291] to gather additional debugging logs...
	I1217 00:29:17.430260 1137611 cli_runner.go:164] Run: docker network inspect addons-219291
	W1217 00:29:17.446492 1137611 cli_runner.go:211] docker network inspect addons-219291 returned with exit code 1
	I1217 00:29:17.446537 1137611 network_create.go:287] error running [docker network inspect addons-219291]: docker network inspect addons-219291: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-219291 not found
	I1217 00:29:17.446560 1137611 network_create.go:289] output of [docker network inspect addons-219291]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-219291 not found
	
	** /stderr **
	I1217 00:29:17.446659 1137611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:29:17.462719 1137611 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001947a90}
	I1217 00:29:17.462758 1137611 network_create.go:124] attempt to create docker network addons-219291 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:29:17.462814 1137611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-219291 addons-219291
	I1217 00:29:17.525449 1137611 network_create.go:108] docker network addons-219291 192.168.49.0/24 created
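# Sketch: confirming the network created above, assuming it still exists.
docker network inspect addons-219291 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'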
	I1217 00:29:17.525488 1137611 kic.go:121] calculated static IP "192.168.49.2" for the "addons-219291" container
	I1217 00:29:17.525562 1137611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:29:17.540205 1137611 cli_runner.go:164] Run: docker volume create addons-219291 --label name.minikube.sigs.k8s.io=addons-219291 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:29:17.559071 1137611 oci.go:103] Successfully created a docker volume addons-219291
	I1217 00:29:17.559180 1137611 cli_runner.go:164] Run: docker run --rm --name addons-219291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --entrypoint /usr/bin/test -v addons-219291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:29:19.327922 1137611 cli_runner.go:217] Completed: docker run --rm --name addons-219291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --entrypoint /usr/bin/test -v addons-219291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.76869955s)
	I1217 00:29:19.327952 1137611 oci.go:107] Successfully prepared a docker volume addons-219291
	I1217 00:29:19.328002 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:29:19.328017 1137611 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:29:19.328087 1137611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-219291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:29:23.528612 1137611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-219291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.200482419s)
	I1217 00:29:23.528649 1137611 kic.go:203] duration metric: took 4.200629083s to extract preloaded images to volume ...
	W1217 00:29:23.528789 1137611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 00:29:23.528915 1137611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:29:23.587229 1137611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-219291 --name addons-219291 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-219291 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-219291 --network addons-219291 --ip 192.168.49.2 --volume addons-219291:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:29:23.876165 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Running}}
	I1217 00:29:23.901675 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:23.922095 1137611 cli_runner.go:164] Run: docker exec addons-219291 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:29:23.981200 1137611 oci.go:144] the created container "addons-219291" has a running status.
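# Sketch: the same checks this log performs via cli_runner, runnable by hand
# against the created node container.
docker container inspect addons-219291 --format '{{.State.Status}}'
docker port addons-219291 22
docker exec addons-219291 stat /var/lib/dpkg/alternatives/iptables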
	I1217 00:29:23.981236 1137611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa...
	I1217 00:29:24.108794 1137611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:29:24.136411 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:24.163852 1137611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:29:24.163879 1137611 kic_runner.go:114] Args: [docker exec --privileged addons-219291 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:29:24.229716 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:24.253893 1137611 machine.go:94] provisionDockerMachine start ...
	I1217 00:29:24.253997 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:24.278554 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:24.278875 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:24.278884 1137611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:29:24.279455 1137611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56286->127.0.0.1:33893: read: connection reset by peer
	I1217 00:29:27.412313 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-219291
	
	I1217 00:29:27.412335 1137611 ubuntu.go:182] provisioning hostname "addons-219291"
	I1217 00:29:27.412409 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:27.431663 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:27.431993 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:27.432004 1137611 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-219291 && echo "addons-219291" | sudo tee /etc/hostname
	I1217 00:29:27.569734 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-219291
	
	I1217 00:29:27.569813 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:27.586573 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:27.586889 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:27.586912 1137611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-219291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-219291/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-219291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:29:27.716711 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:29:27.716739 1137611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:29:27.716762 1137611 ubuntu.go:190] setting up certificates
	I1217 00:29:27.716781 1137611 provision.go:84] configureAuth start
	I1217 00:29:27.716847 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:27.736455 1137611 provision.go:143] copyHostCerts
	I1217 00:29:27.736534 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:29:27.736650 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:29:27.736705 1137611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:29:27.736754 1137611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.addons-219291 san=[127.0.0.1 192.168.49.2 addons-219291 localhost minikube]
	I1217 00:29:28.368211 1137611 provision.go:177] copyRemoteCerts
	I1217 00:29:28.368290 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:29:28.368333 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.385274 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:28.480585 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:29:28.498025 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:29:28.516163 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:29:28.533796 1137611 provision.go:87] duration metric: took 816.987711ms to configureAuth
	I1217 00:29:28.533823 1137611 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:29:28.534011 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:28.534108 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.551174 1137611 main.go:143] libmachine: Using SSH client type: native
	I1217 00:29:28.551473 1137611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33893 <nil> <nil>}
	I1217 00:29:28.551486 1137611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:29:28.839881 1137611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:29:28.839907 1137611 machine.go:97] duration metric: took 4.585994146s to provisionDockerMachine
	I1217 00:29:28.839919 1137611 client.go:176] duration metric: took 11.90083791s to LocalClient.Create
	I1217 00:29:28.839933 1137611 start.go:167] duration metric: took 11.900903247s to libmachine.API.Create "addons-219291"
	I1217 00:29:28.839940 1137611 start.go:293] postStartSetup for "addons-219291" (driver="docker")
	I1217 00:29:28.839950 1137611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:29:28.840031 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:29:28.840077 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:28.860967 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:28.960734 1137611 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:29:28.964300 1137611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:29:28.964332 1137611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:29:28.964351 1137611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:29:28.964445 1137611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:29:28.964477 1137611 start.go:296] duration metric: took 124.53156ms for postStartSetup
	I1217 00:29:28.964825 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:28.982820 1137611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/config.json ...
	I1217 00:29:28.983121 1137611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:29:28.983174 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.000632 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.093592 1137611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:29:29.098293 1137611 start.go:128] duration metric: took 12.162926732s to createHost
	I1217 00:29:29.098322 1137611 start.go:83] releasing machines lock for "addons-219291", held for 12.163067914s
	I1217 00:29:29.098406 1137611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-219291
	I1217 00:29:29.115379 1137611 ssh_runner.go:195] Run: cat /version.json
	I1217 00:29:29.115391 1137611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:29:29.115431 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.115454 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:29.135338 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.135923 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:29.324290 1137611 ssh_runner.go:195] Run: systemctl --version
	I1217 00:29:29.330603 1137611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:29:29.366794 1137611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:29:29.371235 1137611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:29:29.371318 1137611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:29:29.400111 1137611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 00:29:29.400185 1137611 start.go:496] detecting cgroup driver to use...
	I1217 00:29:29.400237 1137611 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:29:29.400320 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:29:29.418047 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:29:29.430534 1137611 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:29:29.430602 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:29:29.448043 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:29:29.465618 1137611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:29:29.585931 1137611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:29:29.713860 1137611 docker.go:234] disabling docker service ...
	I1217 00:29:29.713980 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:29:29.735715 1137611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:29:29.748796 1137611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:29:29.874818 1137611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:29:30.016234 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:29:30.043132 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:29:30.063416 1137611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:29:30.063577 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.075262 1137611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:29:30.075373 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.086498 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.097618 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.109129 1137611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:29:30.117843 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.127124 1137611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.141693 1137611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:29:30.151096 1137611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:29:30.159312 1137611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:29:30.167161 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:30.275269 1137611 ssh_runner.go:195] Run: sudo systemctl restart crio
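Taken together, the sed commands above rewrite the CRI-O drop-in before the restart; a hedged sketch of checking the result (the exact file layout may differ between kicbase versions):

    # show the overrides the commands above wrote into the CRI-O drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|net.ipv4.ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",  (inside default_sysctls)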
	I1217 00:29:30.464222 1137611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:29:30.464367 1137611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:29:30.468247 1137611 start.go:564] Will wait 60s for crictl version
	I1217 00:29:30.468316 1137611 ssh_runner.go:195] Run: which crictl
	I1217 00:29:30.471882 1137611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:29:30.497522 1137611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:29:30.497672 1137611 ssh_runner.go:195] Run: crio --version
	I1217 00:29:30.526333 1137611 ssh_runner.go:195] Run: crio --version
	I1217 00:29:30.556996 1137611 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 00:29:30.559896 1137611 cli_runner.go:164] Run: docker network inspect addons-219291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:29:30.576214 1137611 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:29:30.579961 1137611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
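The command above is an idempotent replace-or-append of a single /etc/hosts entry; the same pattern is reused later for control-plane.minikube.internal. A minimal parameterized sketch of what it does (variable names are illustrative only):

    # drop any existing line ending in the host name, then append the fresh mapping
    ip=192.168.49.1
    name=host.minikube.internal
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts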
	I1217 00:29:30.589427 1137611 kubeadm.go:884] updating cluster {Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:29:30.589538 1137611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:29:30.589596 1137611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:29:30.622882 1137611 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:29:30.622902 1137611 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:29:30.622958 1137611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:29:30.648583 1137611 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:29:30.648659 1137611 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:29:30.648682 1137611 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 00:29:30.648795 1137611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-219291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
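The ExecStart override shown above is written to a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears just below); a hedged sketch of confirming the override took effect once the files are in place:

    # show the effective unit, including the drop-in with the empty-then-set ExecStart
    systemctl cat kubelet
    # re-read units and restart after any change to the drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet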
	I1217 00:29:30.648904 1137611 ssh_runner.go:195] Run: crio config
	I1217 00:29:30.703965 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:29:30.703985 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:29:30.704027 1137611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:29:30.704062 1137611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-219291 NodeName:addons-219291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:29:30.704237 1137611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-219291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:29:30.704329 1137611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:29:30.712118 1137611 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:29:30.712200 1137611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:29:30.719732 1137611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 00:29:30.732970 1137611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:29:30.746036 1137611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
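Not part of minikube's flow, but once the rendered config above is on the node it can be sanity-checked before init; a hedged sketch that assumes the staged kubeadm ships the validate subcommand:

    # static validation of the rendered kubeadm config
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or exercise the full init path without modifying the node
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run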
	I1217 00:29:30.758176 1137611 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:29:30.761520 1137611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:29:30.771061 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:30.879904 1137611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:29:30.895215 1137611 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291 for IP: 192.168.49.2
	I1217 00:29:30.895279 1137611 certs.go:195] generating shared ca certs ...
	I1217 00:29:30.895314 1137611 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:30.895475 1137611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:29:31.281739 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt ...
	I1217 00:29:31.281783 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt: {Name:mk92394348a9935f40213952c9d4fb2eda7e5498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.282003 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key ...
	I1217 00:29:31.282017 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key: {Name:mkda72fe613060d20a63f3cac79dba6dd39106e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.282127 1137611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:29:31.457029 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt ...
	I1217 00:29:31.457068 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt: {Name:mk938c89539bbbe196bc596f338cfab76af6c380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.457349 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key ...
	I1217 00:29:31.457367 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key: {Name:mkfeda5155b13c1f96e38fb52393a3561bb8db26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.457475 1137611 certs.go:257] generating profile certs ...
	I1217 00:29:31.457548 1137611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key
	I1217 00:29:31.457566 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt with IP's: []
	I1217 00:29:31.701779 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt ...
	I1217 00:29:31.701817 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: {Name:mk88410de71422f0cc13c1a134a421e6c8a8bc16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.702052 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key ...
	I1217 00:29:31.702067 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.key: {Name:mkb3ddac70f38b743f729f188f949c6969ac609d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.702177 1137611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49
	I1217 00:29:31.702200 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:29:31.800081 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 ...
	I1217 00:29:31.800113 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49: {Name:mkf055a340a300d8c006061c460891ea08ff8a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.800324 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49 ...
	I1217 00:29:31.800342 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49: {Name:mk79dc3e4ab29523777a4fa252cc6966bb976355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:31.800449 1137611 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt.62262c49 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt
	I1217 00:29:31.800535 1137611 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key.62262c49 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key
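For reference, the SANs requested above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2) can be read back out of the generated apiserver certificate; a minimal sketch:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt \
      | grep -A1 'Subject Alternative Name'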
	I1217 00:29:31.800594 1137611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key
	I1217 00:29:31.800618 1137611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt with IP's: []
	I1217 00:29:32.020248 1137611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt ...
	I1217 00:29:32.020283 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt: {Name:mk34d2a918d25278db931914d99e52a66d5a9615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:32.020518 1137611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key ...
	I1217 00:29:32.020543 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key: {Name:mk7ef58b405dea739bf6abc0d863015d75941ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:32.020734 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:29:32.020781 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:29:32.020816 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:29:32.020845 1137611 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:29:32.021505 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:29:32.041678 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:29:32.065429 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:29:32.085479 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:29:32.103497 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:29:32.121631 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:29:32.141504 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:29:32.160920 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:29:32.178686 1137611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:29:32.196567 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:29:32.209487 1137611 ssh_runner.go:195] Run: openssl version
	I1217 00:29:32.215724 1137611 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.223105 1137611 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:29:32.230605 1137611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.234502 1137611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.234573 1137611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:29:32.275767 1137611 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:29:32.283372 1137611 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
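The b5213941.0 link name above is the certificate's OpenSSL subject hash, which is what -CApath lookups use to find a CA; a hedged sketch of checking that the link makes the minikube CA usable for path-based verification:

    # the hash that names the symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # a cert signed by that CA should now verify via the hashed directory
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt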
	I1217 00:29:32.290921 1137611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:29:32.294705 1137611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:29:32.294759 1137611 kubeadm.go:401] StartCluster: {Name:addons-219291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-219291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:29:32.294847 1137611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:29:32.294914 1137611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:29:32.321773 1137611 cri.go:89] found id: ""
	I1217 00:29:32.321880 1137611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:29:32.330432 1137611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:29:32.338128 1137611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:29:32.338229 1137611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:29:32.345903 1137611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:29:32.345923 1137611 kubeadm.go:158] found existing configuration files:
	
	I1217 00:29:32.345992 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:29:32.353924 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:29:32.354046 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:29:32.361554 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:29:32.369329 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:29:32.369424 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:29:32.376983 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:29:32.384562 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:29:32.384629 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:29:32.392132 1137611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:29:32.399834 1137611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:29:32.399954 1137611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:29:32.407247 1137611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:29:32.447089 1137611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:29:32.447152 1137611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:29:32.487326 1137611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:29:32.487413 1137611 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:29:32.487452 1137611 kubeadm.go:319] OS: Linux
	I1217 00:29:32.487506 1137611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:29:32.487559 1137611 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:29:32.487619 1137611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:29:32.487680 1137611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:29:32.487737 1137611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:29:32.487795 1137611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:29:32.487855 1137611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:29:32.487918 1137611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:29:32.487972 1137611 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:29:32.565314 1137611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:29:32.565519 1137611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:29:32.565657 1137611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:29:32.573198 1137611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:29:32.580041 1137611 out.go:252]   - Generating certificates and keys ...
	I1217 00:29:32.580171 1137611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:29:32.580255 1137611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:29:33.056240 1137611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:29:34.056761 1137611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:29:34.189451 1137611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:29:34.266607 1137611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:29:34.752860 1137611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:29:34.753055 1137611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-219291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:29:35.373413 1137611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:29:35.373568 1137611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-219291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:29:35.908836 1137611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:29:37.145510 1137611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:29:37.699579 1137611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:29:37.699866 1137611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:29:37.835740 1137611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:29:38.221952 1137611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:29:38.539901 1137611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:29:39.500364 1137611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:29:39.954127 1137611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:29:39.954830 1137611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:29:39.957632 1137611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:29:39.961029 1137611 out.go:252]   - Booting up control plane ...
	I1217 00:29:39.961186 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:29:39.961292 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:29:39.961372 1137611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:29:39.988595 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:29:39.988716 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:29:39.997109 1137611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:29:39.997486 1137611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:29:39.997816 1137611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:29:40.153071 1137611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:29:40.153195 1137611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:29:40.652534 1137611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.762751ms
	I1217 00:29:40.655968 1137611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:29:40.656061 1137611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 00:29:40.656151 1137611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:29:40.656229 1137611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:29:44.538120 1137611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.881713205s
	I1217 00:29:45.380161 1137611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.724193711s
	I1217 00:29:47.157802 1137611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501681113s
	I1217 00:29:47.189826 1137611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:29:47.205166 1137611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:29:47.222886 1137611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:29:47.223107 1137611 kubeadm.go:319] [mark-control-plane] Marking the node addons-219291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:29:47.238189 1137611 kubeadm.go:319] [bootstrap-token] Using token: 5b0eqd.nc2k7xajx6gxbf2i
	I1217 00:29:47.241254 1137611 out.go:252]   - Configuring RBAC rules ...
	I1217 00:29:47.241393 1137611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:29:47.250387 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:29:47.263593 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:29:47.268194 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:29:47.272806 1137611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:29:47.278070 1137611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:29:47.565001 1137611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:29:48.014413 1137611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:29:48.564932 1137611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:29:48.566514 1137611 kubeadm.go:319] 
	I1217 00:29:48.566608 1137611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:29:48.566619 1137611 kubeadm.go:319] 
	I1217 00:29:48.566706 1137611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:29:48.566717 1137611 kubeadm.go:319] 
	I1217 00:29:48.566749 1137611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:29:48.566822 1137611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:29:48.566885 1137611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:29:48.566900 1137611 kubeadm.go:319] 
	I1217 00:29:48.566964 1137611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:29:48.566973 1137611 kubeadm.go:319] 
	I1217 00:29:48.567028 1137611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:29:48.567037 1137611 kubeadm.go:319] 
	I1217 00:29:48.567093 1137611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:29:48.567178 1137611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:29:48.567255 1137611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:29:48.567263 1137611 kubeadm.go:319] 
	I1217 00:29:48.567350 1137611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:29:48.567434 1137611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:29:48.567442 1137611 kubeadm.go:319] 
	I1217 00:29:48.567539 1137611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5b0eqd.nc2k7xajx6gxbf2i \
	I1217 00:29:48.567672 1137611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 \
	I1217 00:29:48.567702 1137611 kubeadm.go:319] 	--control-plane 
	I1217 00:29:48.567710 1137611 kubeadm.go:319] 
	I1217 00:29:48.567813 1137611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:29:48.567821 1137611 kubeadm.go:319] 
	I1217 00:29:48.567913 1137611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5b0eqd.nc2k7xajx6gxbf2i \
	I1217 00:29:48.568018 1137611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 
	I1217 00:29:48.570852 1137611 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 00:29:48.571117 1137611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:29:48.571236 1137611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
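With init complete and admin.conf written, the new control plane can be queried directly; a minimal sketch reusing the kubectl binary minikube staged on the node (the single node will stay NotReady until the CNI below is applied):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get pods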
	I1217 00:29:48.571261 1137611 cni.go:84] Creating CNI manager for ""
	I1217 00:29:48.571269 1137611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:29:48.574595 1137611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 00:29:48.577523 1137611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 00:29:48.582113 1137611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 00:29:48.582136 1137611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 00:29:48.596346 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
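A hedged way to confirm the kindnet manifest applied above actually landed (resource names depend on the manifest, so treat this as illustrative):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets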
	I1217 00:29:48.897636 1137611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:29:48.897720 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:48.897789 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-219291 minikube.k8s.io/updated_at=2025_12_17T00_29_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-219291 minikube.k8s.io/primary=true
	I1217 00:29:49.099267 1137611 ops.go:34] apiserver oom_adj: -16
	I1217 00:29:49.099377 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:49.599498 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:50.099642 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:50.600064 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:51.099580 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:51.599922 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:52.100163 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:52.600464 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:53.100196 1137611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:29:53.201230 1137611 kubeadm.go:1114] duration metric: took 4.303571467s to wait for elevateKubeSystemPrivileges
	I1217 00:29:53.201263 1137611 kubeadm.go:403] duration metric: took 20.906508288s to StartCluster
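The repeated `kubectl get sa default` runs between 00:29:49 and 00:29:53 are a readiness poll: the cluster-admin binding only becomes useful once the ServiceAccount controller has created the default ServiceAccount, so minikube retries roughly every 500ms until the get succeeds (about 4.3s here, per the duration metric above). A minimal sketch of the same wait, using the paths from the log:

	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done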
	I1217 00:29:53.201282 1137611 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:53.201402 1137611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:29:53.201798 1137611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:29:53.202004 1137611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:29:53.202143 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:29:53.202386 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:53.202426 1137611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 00:29:53.202508 1137611 addons.go:70] Setting yakd=true in profile "addons-219291"
	I1217 00:29:53.202529 1137611 addons.go:239] Setting addon yakd=true in "addons-219291"
	I1217 00:29:53.202552 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.203009 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.203509 1137611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-219291"
	I1217 00:29:53.203534 1137611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-219291"
	I1217 00:29:53.203557 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.203987 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.204135 1137611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-219291"
	I1217 00:29:53.204158 1137611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-219291"
	I1217 00:29:53.204183 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.204648 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207324 1137611 addons.go:70] Setting cloud-spanner=true in profile "addons-219291"
	I1217 00:29:53.207360 1137611 addons.go:239] Setting addon cloud-spanner=true in "addons-219291"
	I1217 00:29:53.207397 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.207455 1137611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-219291"
	I1217 00:29:53.207502 1137611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-219291"
	I1217 00:29:53.207527 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.207916 1137611 addons.go:70] Setting registry=true in profile "addons-219291"
	I1217 00:29:53.207964 1137611 addons.go:239] Setting addon registry=true in "addons-219291"
	I1217 00:29:53.207979 1137611 addons.go:70] Setting gcp-auth=true in profile "addons-219291"
	I1217 00:29:53.208009 1137611 mustload.go:66] Loading cluster: addons-219291
	I1217 00:29:53.208043 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.208150 1137611 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:29:53.208349 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.208598 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207969 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.220118 1137611 addons.go:70] Setting ingress=true in profile "addons-219291"
	I1217 00:29:53.220151 1137611 addons.go:239] Setting addon ingress=true in "addons-219291"
	I1217 00:29:53.220209 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.220778 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.221499 1137611 addons.go:70] Setting ingress-dns=true in profile "addons-219291"
	I1217 00:29:53.221532 1137611 addons.go:239] Setting addon ingress-dns=true in "addons-219291"
	I1217 00:29:53.221617 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.222307 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.207974 1137611 addons.go:70] Setting default-storageclass=true in profile "addons-219291"
	I1217 00:29:53.230765 1137611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-219291"
	I1217 00:29:53.230961 1137611 addons.go:70] Setting registry-creds=true in profile "addons-219291"
	I1217 00:29:53.230986 1137611 addons.go:239] Setting addon registry-creds=true in "addons-219291"
	I1217 00:29:53.231015 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.231419 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.235502 1137611 addons.go:70] Setting inspektor-gadget=true in profile "addons-219291"
	I1217 00:29:53.235537 1137611 addons.go:239] Setting addon inspektor-gadget=true in "addons-219291"
	I1217 00:29:53.235626 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.236251 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.251969 1137611 addons.go:70] Setting storage-provisioner=true in profile "addons-219291"
	I1217 00:29:53.252011 1137611 addons.go:239] Setting addon storage-provisioner=true in "addons-219291"
	I1217 00:29:53.252047 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.252552 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.256406 1137611 addons.go:70] Setting metrics-server=true in profile "addons-219291"
	I1217 00:29:53.256461 1137611 addons.go:239] Setting addon metrics-server=true in "addons-219291"
	I1217 00:29:53.256495 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.256977 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.275958 1137611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-219291"
	I1217 00:29:53.276255 1137611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-219291"
	I1217 00:29:53.280669 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.287998 1137611 out.go:179] * Verifying Kubernetes components...
	I1217 00:29:53.292786 1137611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:29:53.293452 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.310477 1137611 addons.go:70] Setting volcano=true in profile "addons-219291"
	I1217 00:29:53.310558 1137611 addons.go:239] Setting addon volcano=true in "addons-219291"
	I1217 00:29:53.310609 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.311116 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.336245 1137611 addons.go:70] Setting volumesnapshots=true in profile "addons-219291"
	I1217 00:29:53.336348 1137611 addons.go:239] Setting addon volumesnapshots=true in "addons-219291"
	I1217 00:29:53.336410 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.337461 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.350452 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.371652 1137611 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 00:29:53.374612 1137611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:29:53.374635 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 00:29:53.374705 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.460463 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 00:29:53.466659 1137611 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 00:29:53.470947 1137611 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1217 00:29:53.478359 1137611 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 00:29:53.487014 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 00:29:53.487044 1137611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 00:29:53.487126 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.501150 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 00:29:53.501627 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 00:29:53.503449 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.511215 1137611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:29:53.511236 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 00:29:53.511304 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.514149 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 00:29:53.514402 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.517562 1137611 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1217 00:29:53.519060 1137611 addons.go:239] Setting addon default-storageclass=true in "addons-219291"
	I1217 00:29:53.519097 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.519512 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	W1217 00:29:53.536867 1137611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 00:29:53.543360 1137611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-219291"
	I1217 00:29:53.543414 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:29:53.543893 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:29:53.554483 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 00:29:53.554759 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:29:53.555792 1137611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:29:53.555810 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 00:29:53.555863 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.561003 1137611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:29:53.561022 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:29:53.561100 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.577450 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 00:29:53.577866 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:29:53.581814 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 00:29:53.601503 1137611 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 00:29:53.609425 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:29:53.609655 1137611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:29:53.609701 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 00:29:53.609803 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.616665 1137611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:29:53.616729 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 00:29:53.616820 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.623983 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 00:29:53.624058 1137611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 00:29:53.624150 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.626833 1137611 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 00:29:53.641537 1137611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 00:29:53.641632 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 00:29:53.641747 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.665939 1137611 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 00:29:53.676628 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 00:29:53.676816 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:29:53.676871 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 00:29:53.677003 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.678637 1137611 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 00:29:53.685984 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 00:29:53.686132 1137611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 00:29:53.686294 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.698461 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.699695 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 00:29:53.704628 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 00:29:53.712547 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 00:29:53.716672 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 00:29:53.718789 1137611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:29:53.718808 1137611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:29:53.718870 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.725522 1137611 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 00:29:53.731549 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 00:29:53.731576 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 00:29:53.731652 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.754078 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.757220 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.764131 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.786642 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.792175 1137611 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 00:29:53.796826 1137611 out.go:179]   - Using image docker.io/busybox:stable
	I1217 00:29:53.799784 1137611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:29:53.799812 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 00:29:53.799876 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:29:53.827102 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.840580 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.895477 1137611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:29:53.895667 1137611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 00:29:53.907000 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.917009 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.917477 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.924718 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.930747 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.930747 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.938509 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:53.944293 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:29:54.501877 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:29:54.541528 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 00:29:54.541596 1137611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 00:29:54.565306 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:29:54.611642 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:29:54.615534 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:29:54.618222 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 00:29:54.618246 1137611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 00:29:54.685566 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 00:29:54.685596 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 00:29:54.689391 1137611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:29:54.689417 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 00:29:54.701945 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 00:29:54.701973 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 00:29:54.715291 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:29:54.727290 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 00:29:54.782972 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 00:29:54.782999 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 00:29:54.789931 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:29:54.793930 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:29:54.805234 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:29:54.808430 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 00:29:54.808456 1137611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 00:29:54.842412 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:29:54.913962 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 00:29:54.913991 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 00:29:54.943454 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 00:29:54.943487 1137611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 00:29:54.945910 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:29:55.033431 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 00:29:55.033457 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 00:29:55.080994 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 00:29:55.081016 1137611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 00:29:55.147051 1137611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:29:55.147088 1137611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 00:29:55.163299 1137611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 00:29:55.163327 1137611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 00:29:55.252538 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 00:29:55.252567 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 00:29:55.261716 1137611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:29:55.261760 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 00:29:55.325884 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 00:29:55.325915 1137611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 00:29:55.340903 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:29:55.429020 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:29:55.441800 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 00:29:55.441828 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 00:29:55.523159 1137611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:55.523193 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 00:29:55.639845 1137611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 00:29:55.639874 1137611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 00:29:55.725624 1137611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.83010308s)
	I1217 00:29:55.725681 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.223736317s)
	I1217 00:29:55.725817 1137611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.830136901s)
	I1217 00:29:55.725835 1137611 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
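The sed pipeline completed above rewrites the coredns ConfigMap in place. Reconstructed from the sed expressions, the Corefile gains a log directive and a hosts block like the following, which is what makes host.minikube.internal resolve to the host gateway 192.168.49.1 from inside the cluster:

	        log
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }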
	I1217 00:29:55.727158 1137611 node_ready.go:35] waiting up to 6m0s for node "addons-219291" to be "Ready" ...
	I1217 00:29:55.856100 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:55.959028 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 00:29:55.959054 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 00:29:56.232472 1137611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-219291" context rescaled to 1 replicas
	I1217 00:29:56.234776 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 00:29:56.234815 1137611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 00:29:56.372878 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 00:29:56.372919 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 00:29:56.649174 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 00:29:56.649248 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 00:29:56.988484 1137611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:29:56.988553 1137611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 00:29:57.137192 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1217 00:29:57.735333 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:29:59.649164 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.033558926s)
	I1217 00:29:59.649264 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.933946846s)
	I1217 00:29:59.649499 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.922182506s)
	I1217 00:29:59.649572 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.859612721s)
	I1217 00:29:59.649625 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.855670742s)
	I1217 00:29:59.649690 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.844433545s)
	I1217 00:29:59.649716 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.807282224s)
	I1217 00:29:59.649844 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.703903234s)
	I1217 00:29:59.649863 1137611 addons.go:495] Verifying addon registry=true in "addons-219291"
	I1217 00:29:59.649989 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.037394855s)
	I1217 00:29:59.649974 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.084596227s)
	I1217 00:29:59.650075 1137611 addons.go:495] Verifying addon ingress=true in "addons-219291"
	I1217 00:29:59.650129 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.30919103s)
	I1217 00:29:59.650151 1137611 addons.go:495] Verifying addon metrics-server=true in "addons-219291"
	I1217 00:29:59.650197 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.221139258s)
	I1217 00:29:59.650610 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.794480844s)
	W1217 00:29:59.651017 1137611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:29:59.651045 1137611 retry.go:31] will retry after 249.132448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
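The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass object and the CRD that defines it were sent in a single kubectl apply, and the API server had not registered the CRD yet when the custom resource arrived, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (and re-applies with --force at 00:29:59.900 below). A hedged sketch of the conventional two-phase fix, using only file and CRD names from this log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml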
	I1217 00:29:59.652891 1137611 out.go:179] * Verifying registry addon...
	I1217 00:29:59.652992 1137611 out.go:179] * Verifying ingress addon...
	I1217 00:29:59.655016 1137611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-219291 service yakd-dashboard -n yakd-dashboard
	
	I1217 00:29:59.657784 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 00:29:59.658706 1137611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 00:29:59.702184 1137611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:29:59.702211 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:29:59.703512 1137611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 00:29:59.703539 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1217 00:29:59.709075 1137611 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
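The default-storageclass error above is an optimistic-concurrency conflict: while marking "standard" as the default StorageClass the addon also tries to clear the default annotation on local-path, and that object changed between read and write, so the update is rejected with "the object has been modified". Re-issuing the patch against the latest version is the usual remedy; a hedged manual equivalent (the annotation key is the standard Kubernetes one and is not quoted in this log):

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'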
	W1217 00:29:59.738428 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:29:59.900890 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:29:59.977059 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.83980806s)
	I1217 00:29:59.977106 1137611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-219291"
	I1217 00:29:59.980099 1137611 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 00:29:59.983794 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 00:29:59.994154 1137611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:29:59.994180 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
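The kapi.go:96 lines from here on are a label-selector poll: minikube re-lists the pods matching each selector roughly every 500ms and prints the same "current state: Pending" message until they report Running, which is why the next stretch of the log repeats. The equivalent manual checks, with selectors and namespaces taken from the log:

	kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver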
	I1217 00:30:00.195267 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:00.195541 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:00.493131 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:00.665141 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:00.669464 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:00.988610 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:01.165898 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:01.166256 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:01.166779 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 00:30:01.166887 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:30:01.188245 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:30:01.304297 1137611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 00:30:01.324972 1137611 addons.go:239] Setting addon gcp-auth=true in "addons-219291"
	I1217 00:30:01.325141 1137611 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:30:01.325667 1137611 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:30:01.347828 1137611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 00:30:01.347893 1137611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:30:01.367138 1137611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:30:01.487718 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:01.660690 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:01.661451 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:01.987489 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:02.162461 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:02.162549 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 00:30:02.230245 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:02.487393 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:02.662050 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:02.662088 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:02.987278 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:03.156878 1137611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.25593985s)
	I1217 00:30:03.156942 1137611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.809087386s)
	I1217 00:30:03.160483 1137611 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:30:03.163138 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:03.163871 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:03.166160 1137611 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 00:30:03.168958 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 00:30:03.168989 1137611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 00:30:03.182430 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 00:30:03.182493 1137611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 00:30:03.196669 1137611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:30:03.196694 1137611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 00:30:03.210115 1137611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:30:03.487185 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:03.665961 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:03.667014 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:03.710375 1137611 addons.go:495] Verifying addon gcp-auth=true in "addons-219291"
	I1217 00:30:03.713524 1137611 out.go:179] * Verifying gcp-auth addon...
	I1217 00:30:03.717087 1137611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 00:30:03.721386 1137611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 00:30:03.721451 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:03.987554 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:04.161910 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:04.162054 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:04.220978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:04.230756 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:04.486994 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:04.661242 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:04.662487 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:04.720167 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:04.987701 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:05.162346 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:05.163449 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:05.220507 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:05.486799 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:05.661051 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:05.661827 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:05.721801 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:05.987185 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:06.163079 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:06.163361 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:06.220126 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:06.231100 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:06.487060 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:06.660800 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:06.661869 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:06.720825 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:06.987739 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:07.161510 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:07.162070 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:07.220056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:07.487353 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:07.661814 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:07.661924 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:07.720961 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:07.988072 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:08.161090 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:08.162377 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:08.221036 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:08.487440 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:08.661927 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:08.662046 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:08.720955 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:08.730609 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:08.986824 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:09.161977 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:09.162142 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:09.220121 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:09.487457 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:09.661756 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:09.661942 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:09.720681 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:09.987468 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:10.161920 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:10.162037 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:10.221991 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:10.487305 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:10.662025 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:10.662262 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:10.719918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:10.731812 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:10.987074 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:11.162190 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:11.162859 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:11.220410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:11.487320 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:11.661948 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:11.662147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:11.720795 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:11.987589 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:12.161834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:12.162004 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:12.220938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:12.487062 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:12.661069 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:12.661615 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:12.720587 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:12.987250 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:13.161393 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:13.162092 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:13.221195 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:13.230912 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:13.487271 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:13.661994 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:13.662147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:13.720080 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:13.987632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:14.160873 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:14.162468 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:14.220410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:14.487566 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:14.662059 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:14.662308 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:14.720003 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:14.987334 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:15.163232 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:15.163353 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:15.220388 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:15.487452 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:15.661601 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:15.661754 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:15.720573 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:15.730212 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:15.987653 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:16.162236 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:16.162351 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:16.221221 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:16.487616 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:16.661865 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:16.662509 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:16.720173 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:16.987645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:17.160745 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:17.162353 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:17.220286 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:17.487327 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:17.661285 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:17.662001 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:17.720899 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:17.730635 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:17.988129 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:18.162261 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:18.162973 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:18.221558 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:18.487716 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:18.661678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:18.661935 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:18.721006 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:18.986931 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:19.161189 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:19.162115 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:19.219900 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:19.486832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:19.661972 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:19.662102 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:19.719985 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:19.987818 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:20.160518 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:20.162233 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:20.220511 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:20.230185 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:20.486808 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:20.663316 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:20.665129 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:20.719983 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:20.986688 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:21.162834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:21.163247 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:21.219848 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:21.487673 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:21.661661 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:21.661859 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:21.720264 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:21.987213 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:22.161163 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:22.162216 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:22.220114 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:22.231081 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:22.487265 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:22.661756 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:22.661892 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:22.720628 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:22.986649 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:23.160595 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:23.161889 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:23.220724 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:23.487759 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:23.662751 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:23.662949 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:23.720696 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:23.988059 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:24.161178 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:24.162413 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:24.220231 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:24.231231 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:24.487442 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:24.662027 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:24.662234 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:24.721299 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:24.987809 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:25.171860 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:25.172550 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:25.223617 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:25.487085 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:25.661060 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:25.662405 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:25.719928 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:25.987327 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:26.162753 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:26.162942 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:26.220659 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:26.487190 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:26.662295 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:26.662804 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:26.720633 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:26.730665 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:26.986978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:27.161414 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:27.162298 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:27.220963 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:27.487799 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:27.660868 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:27.661313 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:27.720147 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:27.986746 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:28.162022 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:28.162622 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:28.220329 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:28.487115 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:28.661152 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:28.663013 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:28.720869 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:28.991632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:29.162247 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:29.162545 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:29.220195 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:29.231646 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:29.486547 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:29.662028 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:29.662717 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:29.720274 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:29.987560 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:30.161940 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:30.162145 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:30.221500 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:30.486661 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:30.660699 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:30.661936 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:30.720697 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:30.987372 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:31.161985 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:31.162336 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:31.220966 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:31.486999 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:31.661873 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:31.662235 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:31.720119 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:31.731670 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:31.986953 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:32.161150 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:32.162130 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:32.220925 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:32.486929 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:32.661168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:32.661895 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:32.720647 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:32.986904 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:33.161333 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:33.162214 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:33.220075 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:33.487062 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:33.661813 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:33.661885 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:33.721004 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:33.987054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:34.160856 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:34.163176 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:34.220812 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1217 00:30:34.230453 1137611 node_ready.go:57] node "addons-219291" has "Ready":"False" status (will retry)
	I1217 00:30:34.487478 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:34.661987 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:34.662365 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:34.720092 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.010678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:35.165318 1137611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:30:35.165345 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:35.168994 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:35.234348 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.245841 1137611 node_ready.go:49] node "addons-219291" is "Ready"
	I1217 00:30:35.245877 1137611 node_ready.go:38] duration metric: took 39.51860602s for node "addons-219291" to be "Ready" ...
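
The block above is the tail of minikube's node-readiness poll: the node object is re-fetched every couple of seconds until its Ready condition flips to True (here after roughly 39.5s). A hedged client-go sketch of that fetch-sleep-retry pattern follows; it is an illustration only, not minikube's node_ready.go, and it assumes a standard kubeconfig plus the k8s.io/client-go module.

// node_ready_sketch.go - hypothetical illustration of the node-readiness wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube uses its own profile-scoped config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "addons-219291", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println(`node "addons-219291" is "Ready"`)
					return
				}
			}
		}
		// The log shows roughly 2s between "will retry" messages.
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

The same get-check-sleep shape also drives the kapi.go pod waits that make up most of this log.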
	I1217 00:30:35.245893 1137611 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:30:35.245953 1137611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:30:35.276291 1137611 api_server.go:72] duration metric: took 42.07423472s to wait for apiserver process to appear ...
	I1217 00:30:35.276317 1137611 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:30:35.276336 1137611 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 00:30:35.288943 1137611 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 00:30:35.290226 1137611 api_server.go:141] control plane version: v1.34.2
	I1217 00:30:35.290263 1137611 api_server.go:131] duration metric: took 13.939082ms to wait for apiserver health ...
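
The healthz wait logged just above is a plain HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with the literal body "ok" once the control plane is serving. A minimal sketch of such a probe (TLS verification is skipped here purely for brevity; minikube itself trusts the cluster CA):

// healthz_sketch.go - minimal sketch of the /healthz probe recorded above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client should verify against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}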
	I1217 00:30:35.290274 1137611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:30:35.308577 1137611 system_pods.go:59] 19 kube-system pods found
	I1217 00:30:35.308614 1137611 system_pods.go:61] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.308622 1137611 system_pods.go:61] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.308637 1137611 system_pods.go:61] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.308641 1137611 system_pods.go:61] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending
	I1217 00:30:35.308645 1137611 system_pods.go:61] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.308655 1137611 system_pods.go:61] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.308659 1137611 system_pods.go:61] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.308669 1137611 system_pods.go:61] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.308674 1137611 system_pods.go:61] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending
	I1217 00:30:35.308677 1137611 system_pods.go:61] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.308681 1137611 system_pods.go:61] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.308689 1137611 system_pods.go:61] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.308705 1137611 system_pods.go:61] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending
	I1217 00:30:35.308718 1137611 system_pods.go:61] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.308724 1137611 system_pods.go:61] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.308733 1137611 system_pods.go:61] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.308739 1137611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.308746 1137611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending
	I1217 00:30:35.308752 1137611 system_pods.go:61] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.308762 1137611 system_pods.go:74] duration metric: took 18.481891ms to wait for pod list to return data ...
	I1217 00:30:35.308771 1137611 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:30:35.335760 1137611 default_sa.go:45] found service account: "default"
	I1217 00:30:35.335790 1137611 default_sa.go:55] duration metric: took 27.001163ms for default service account to be created ...
	I1217 00:30:35.335801 1137611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:30:35.462372 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:35.462419 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.462426 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.462432 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.462437 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending
	I1217 00:30:35.462440 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.462445 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.462450 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.462463 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.462472 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending
	I1217 00:30:35.462476 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.462482 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.462494 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.462499 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending
	I1217 00:30:35.462512 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.462518 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.462529 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.462544 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.462551 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending
	I1217 00:30:35.462557 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.462575 1137611 retry.go:31] will retry after 233.455932ms: missing components: kube-dns
	I1217 00:30:35.508255 1137611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:30:35.508282 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:35.668256 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:35.668356 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:35.705307 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:35.705354 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:35.705362 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending
	I1217 00:30:35.705368 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending
	I1217 00:30:35.705375 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:35.705381 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:35.705388 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:35.705392 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:35.705397 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:35.705410 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:35.705426 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:35.705436 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:35.705442 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:35.705451 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:35.705461 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:35.705467 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:35.705473 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:35.705481 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.705490 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:35.705507 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:35.705524 1137611 retry.go:31] will retry after 378.643204ms: missing components: kube-dns
	I1217 00:30:35.732108 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:35.989479 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:36.099704 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:36.099743 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:30:36.099760 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:30:36.099772 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:30:36.099782 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:36.099787 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:36.099801 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:36.099805 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:36.099810 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:36.099822 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:36.099826 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:36.099838 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:36.099848 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:36.099857 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:36.099865 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:36.099872 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:36.099878 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:36.099886 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.099898 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.099904 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 00:30:36.099933 1137611 retry.go:31] will retry after 342.296446ms: missing components: kube-dns
	I1217 00:30:36.186599 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:36.186995 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:36.237314 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:36.448136 1137611 system_pods.go:86] 19 kube-system pods found
	I1217 00:30:36.448172 1137611 system_pods.go:89] "coredns-66bc5c9577-2l8cm" [8289a5c8-109a-40fd-a90d-7cda0ffec8b8] Running
	I1217 00:30:36.448184 1137611 system_pods.go:89] "csi-hostpath-attacher-0" [7e091654-aa14-4fc3-9226-70424d3b5152] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:30:36.448193 1137611 system_pods.go:89] "csi-hostpath-resizer-0" [9445f9ce-4ac7-4259-b10e-94dec75ab0cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:30:36.448200 1137611 system_pods.go:89] "csi-hostpathplugin-btcsg" [85a0b9eb-4fcd-42bb-af3a-aa352832d751] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:30:36.448205 1137611 system_pods.go:89] "etcd-addons-219291" [ab24c3b9-073e-4bb3-8f68-276a459c81af] Running
	I1217 00:30:36.448214 1137611 system_pods.go:89] "kindnet-6tjsd" [c8de44b5-1231-4848-a476-4733cd4140fe] Running
	I1217 00:30:36.448219 1137611 system_pods.go:89] "kube-apiserver-addons-219291" [49f740be-dc55-4ad7-9a56-49e3257b4b55] Running
	I1217 00:30:36.448230 1137611 system_pods.go:89] "kube-controller-manager-addons-219291" [3653afde-1594-4637-aaf8-a317a0e3ce20] Running
	I1217 00:30:36.448238 1137611 system_pods.go:89] "kube-ingress-dns-minikube" [3e556435-a9fc-4d2c-aa11-06fe9c24f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:30:36.448262 1137611 system_pods.go:89] "kube-proxy-2c69d" [0c6b7c55-0830-4542-aa54-2ac2a5258c91] Running
	I1217 00:30:36.448267 1137611 system_pods.go:89] "kube-scheduler-addons-219291" [b2490e4c-4396-46c4-bf78-dcf167398c68] Running
	I1217 00:30:36.448275 1137611 system_pods.go:89] "metrics-server-85b7d694d7-h9vmz" [1b28ebf4-935d-4189-b79a-bdf2d1a0eac6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:30:36.448287 1137611 system_pods.go:89] "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:30:36.448295 1137611 system_pods.go:89] "registry-6b586f9694-zh49c" [7f928017-4e5e-4abe-a73e-0ebae7deb934] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:30:36.448301 1137611 system_pods.go:89] "registry-creds-764b6fb674-h6f8z" [ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:30:36.448306 1137611 system_pods.go:89] "registry-proxy-f4nhl" [2d9cb644-967d-44a2-ad9b-d35dc650db69] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:30:36.448325 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dbmhn" [07fffac1-bb73-42db-9bd8-7d1e54cda42b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.448337 1137611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwhl5" [cb3f8c01-448d-4e81-8461-024d8ae79779] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:30:36.448344 1137611 system_pods.go:89] "storage-provisioner" [28618e7e-a5e7-43e6-8013-c9065904d6aa] Running
	I1217 00:30:36.448352 1137611 system_pods.go:126] duration metric: took 1.112545517s to wait for k8s-apps to be running ...
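
The "waiting for k8s-apps to be running" phase above lists the kube-system pods on each attempt and retries after a short delay until no required component is reported as missing (here only kube-dns was outstanding). A rough sketch of that loop, assuming client-go and checking just the CoreDNS pods rather than minikube's full component set:

// pods_running_sketch.go - hedged sketch of the retry loop behind the
// "will retry ...: missing components: kube-dns" lines above.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// coreDNSRunning reports whether any coredns pod in the list is in phase Running.
func coreDNSRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if strings.HasPrefix(p.Name, "coredns-") && p.Status.Phase == corev1.PodRunning {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	for attempt := 1; ; attempt++ {
		list, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil && coreDNSRunning(list.Items) {
			fmt.Printf("k8s-apps running after %d attempt(s)\n", attempt)
			return
		}
		fmt.Println("will retry: missing components: kube-dns")
		// The log shows sub-second, slightly growing retry delays.
		time.Sleep(300 * time.Millisecond)
	}
}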
	I1217 00:30:36.448360 1137611 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:30:36.448455 1137611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:30:36.463092 1137611 system_svc.go:56] duration metric: took 14.72198ms WaitForService to wait for kubelet
	I1217 00:30:36.463131 1137611 kubeadm.go:587] duration metric: took 43.261094432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:30:36.463148 1137611 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:30:36.466629 1137611 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 00:30:36.466662 1137611 node_conditions.go:123] node cpu capacity is 2
	I1217 00:30:36.466682 1137611 node_conditions.go:105] duration metric: took 3.527809ms to run NodePressure ...
	I1217 00:30:36.466711 1137611 start.go:242] waiting for startup goroutines ...
	I1217 00:30:36.488349 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:36.662311 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:36.662939 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:36.721137 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:36.988457 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:37.162101 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:37.163412 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:37.220244 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:37.487731 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:37.662601 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:37.662839 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:37.721054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:37.988556 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:38.164298 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:38.164840 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:38.263238 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:38.487807 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:38.670496 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:38.670788 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:38.724862 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:38.987966 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:39.165219 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:39.165388 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:39.220651 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:39.492399 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:39.660676 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:39.663042 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:39.719953 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:39.988066 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:40.164072 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:40.165226 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:40.220184 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:40.488052 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:40.662158 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:40.662542 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:40.720204 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:40.987398 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:41.161947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:41.162166 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:41.219841 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:41.487056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:41.663908 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:41.665586 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:41.720875 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:41.987987 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:42.164567 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:42.164951 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:42.221373 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:42.487832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:42.661445 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:42.662894 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:42.720730 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:42.991861 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:43.160959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:43.163376 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:43.220036 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:43.487290 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:43.663503 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:43.663833 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:43.721125 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:43.987316 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:44.162418 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:44.164369 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:44.220945 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:44.492892 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:44.661431 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:44.662709 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:44.720657 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:44.986982 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:45.190019 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:45.191046 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:45.233938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:45.489320 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:45.662717 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:45.663002 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:45.721052 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:45.987289 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:46.161937 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:46.162129 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:46.221130 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:46.488203 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:46.661638 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:46.662253 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:46.720229 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:46.987702 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:47.162064 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:47.162174 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:47.220396 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:47.488926 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:47.663577 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:47.663958 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:47.721089 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:47.987912 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:48.162289 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:48.162404 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:48.220815 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:48.487814 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:48.662590 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:48.662757 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:48.721325 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:48.988160 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:49.164972 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:49.165495 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:49.220905 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:49.487882 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:49.663978 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:49.664630 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:49.720852 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:49.987535 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:50.164267 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:50.164665 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:50.221200 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:50.487634 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:50.663759 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:50.664255 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:50.719964 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:50.988536 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:51.162864 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:51.163424 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:51.220696 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:51.487170 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:51.663898 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:51.664080 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:51.719918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:51.987484 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:52.162759 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:52.163306 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:52.220368 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:52.488168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:52.660875 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:52.662894 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:52.721040 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:52.987840 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:53.164304 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:53.164785 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:53.221047 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:53.493030 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:53.663204 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:53.663366 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:53.720051 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:53.987572 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:54.164082 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:54.164570 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:54.236253 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:54.488056 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:54.662082 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:54.663596 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:54.720445 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:54.987769 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:55.161668 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:55.164787 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:55.220961 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:55.488169 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:55.662349 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:55.664477 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:55.720412 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:55.991901 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:56.163663 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:56.165083 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:56.221385 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:56.489083 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:56.663610 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:56.663989 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:56.722950 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:56.989478 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:57.163678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:57.163891 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:57.220634 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:57.488676 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:57.663590 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:57.673119 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:57.720533 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:57.988259 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:58.164203 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:58.164632 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:58.220918 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:58.488653 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:58.661005 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:58.663693 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:58.722387 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:58.988819 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:59.164787 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:59.164890 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:59.222749 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:59.492007 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:30:59.663959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:30:59.664085 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:30:59.721938 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:30:59.987295 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:00.161878 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:00.164949 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:00.223410 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:00.492239 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:00.663783 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:00.664559 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:00.728411 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:00.987710 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:01.163629 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:01.163774 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:01.221319 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:01.493808 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:01.671514 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:01.673698 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:01.723900 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:01.988053 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:02.164058 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:02.164338 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:02.221088 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:02.487382 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:02.663057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:02.663710 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:02.720884 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:02.987383 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:03.164037 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:03.164271 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:03.220474 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:03.487456 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:03.662629 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:03.662832 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:03.720978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:03.987598 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:04.161827 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:04.162000 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:04.220927 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:04.487915 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:04.664400 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:04.664791 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:04.719652 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:04.987902 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:05.162443 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:05.164026 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:05.219975 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:05.487011 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:05.661318 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:05.662830 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:05.721553 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:05.987416 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:06.160747 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:06.162877 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:06.221096 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:06.487916 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:06.664241 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:06.665880 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:06.721419 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:06.988555 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:07.162422 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:07.162591 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:07.221049 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:07.488411 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:07.661722 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:07.662542 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:07.720583 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:07.987761 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:08.162245 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:08.163236 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:08.220907 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:08.488535 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:08.663746 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:08.663947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:08.720978 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:08.987469 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:09.162541 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:09.165707 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:09.221276 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:09.487856 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:09.661791 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:09.663387 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:09.722022 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:09.988284 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:10.164184 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:10.165461 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:10.221334 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:10.491437 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:10.662015 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:10.662109 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:10.720198 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:10.987984 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:11.161237 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:31:11.164802 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:11.220742 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:11.489300 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:11.662077 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:11.662504 1137611 kapi.go:107] duration metric: took 1m12.004720954s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 00:31:11.720952 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:11.987512 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:12.162191 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:12.220111 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:12.487884 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:12.664785 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:12.764230 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:12.987391 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:13.164963 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:13.221054 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:13.494317 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:13.663010 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:13.721294 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:13.987973 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:14.162587 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:14.220527 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:14.488168 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:14.663286 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:14.720174 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:14.995520 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:15.161884 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:15.220466 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:15.491566 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:15.662305 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:15.720212 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:15.988737 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:16.162509 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:16.220528 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:16.495678 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:16.664647 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:16.720834 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:16.988659 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:17.163544 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:17.220654 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:17.487734 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:17.662610 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:17.720525 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:17.987291 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:18.162754 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:18.221167 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:18.488528 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:18.663689 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:18.720816 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:18.989947 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:19.162120 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:19.220477 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:19.488300 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:19.663046 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:19.720244 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:19.988343 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:20.164393 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:20.221309 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:20.488619 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:20.661890 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:20.721844 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:20.995222 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:21.167173 1137611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:31:21.221128 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:21.488003 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:21.663217 1137611 kapi.go:107] duration metric: took 1m22.004504854s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 00:31:21.720761 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:21.988089 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:22.222687 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:22.490484 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:22.720543 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:22.988057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:23.220125 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:23.488409 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:23.720114 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:23.987645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:24.220733 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:24.487952 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:24.721079 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:24.989417 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:25.221655 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:25.487543 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:25.722746 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:25.989773 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:26.221412 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:26.489186 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:26.720919 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:26.988298 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:27.221133 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:27.489148 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:27.721439 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:27.990032 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:28.220601 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:28.488694 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:28.720861 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:28.987159 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:29.219957 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:29.487908 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:29.720967 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:29.987633 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:30.221460 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:30.487728 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:30.722057 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:30.987737 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:31.221247 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:31.490959 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:31:31.721752 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:31.986828 1137611 kapi.go:107] duration metric: took 1m32.003033713s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 00:31:32.221367 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:32.720645 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:33.220035 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:33.721158 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:34.220456 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:34.721116 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:35.220455 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:35.720498 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:36.220995 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:36.720385 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:37.221547 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:37.720311 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:38.222018 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:38.720591 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:39.220245 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:39.720592 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:40.221094 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:40.720865 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:41.220280 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:41.720893 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:42.249469 1137611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:31:42.723837 1137611 kapi.go:107] duration metric: took 1m39.006753771s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 00:31:42.726825 1137611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-219291 cluster.
	I1217 00:31:42.731327 1137611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 00:31:42.734157 1137611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 00:31:42.737330 1137611 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, registry-creds, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1217 00:31:42.740178 1137611 addons.go:530] duration metric: took 1m49.537746606s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner registry-creds inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1217 00:31:42.740222 1137611 start.go:247] waiting for cluster config update ...
	I1217 00:31:42.740245 1137611 start.go:256] writing updated cluster config ...
	I1217 00:31:42.740576 1137611 ssh_runner.go:195] Run: rm -f paused
	I1217 00:31:42.745168 1137611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:31:42.822108 1137611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2l8cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.827179 1137611 pod_ready.go:94] pod "coredns-66bc5c9577-2l8cm" is "Ready"
	I1217 00:31:42.827207 1137611 pod_ready.go:86] duration metric: took 5.067736ms for pod "coredns-66bc5c9577-2l8cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.829418 1137611 pod_ready.go:83] waiting for pod "etcd-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.833653 1137611 pod_ready.go:94] pod "etcd-addons-219291" is "Ready"
	I1217 00:31:42.833678 1137611 pod_ready.go:86] duration metric: took 4.235444ms for pod "etcd-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.835880 1137611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.840725 1137611 pod_ready.go:94] pod "kube-apiserver-addons-219291" is "Ready"
	I1217 00:31:42.840753 1137611 pod_ready.go:86] duration metric: took 4.848287ms for pod "kube-apiserver-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:42.843265 1137611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.149178 1137611 pod_ready.go:94] pod "kube-controller-manager-addons-219291" is "Ready"
	I1217 00:31:43.149210 1137611 pod_ready.go:86] duration metric: took 305.915896ms for pod "kube-controller-manager-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.349803 1137611 pod_ready.go:83] waiting for pod "kube-proxy-2c69d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.750569 1137611 pod_ready.go:94] pod "kube-proxy-2c69d" is "Ready"
	I1217 00:31:43.750597 1137611 pod_ready.go:86] duration metric: took 400.767024ms for pod "kube-proxy-2c69d" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:43.957001 1137611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:44.349688 1137611 pod_ready.go:94] pod "kube-scheduler-addons-219291" is "Ready"
	I1217 00:31:44.349718 1137611 pod_ready.go:86] duration metric: took 392.691672ms for pod "kube-scheduler-addons-219291" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:31:44.349733 1137611 pod_ready.go:40] duration metric: took 1.604533685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:31:44.406250 1137611 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 00:31:44.410035 1137611 out.go:179] * Done! kubectl is now configured to use "addons-219291" cluster and "default" namespace by default
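
The gcp-auth messages above note that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. The following client-go sketch (not part of the test run) shows one way to create such a pod; the label value "true", the kubeconfig path, and the busybox image/command are illustrative assumptions, not something asserted by the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig written by `minikube start` (default ~/.kube/config; assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "busybox-no-gcp-auth",
			// The key is what the addon checks; the value "true" is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}

Per the message above, pods created before the addon was enabled only pick up credentials after being recreated or after rerunning addons enable with --refresh.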
	
	
	==> CRI-O <==
	Dec 17 00:31:45 addons-219291 crio[829]: time="2025-12-17T00:31:45.581930363Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.616127798Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=86a3da83-858e-4f4f-b7a8-96535358a010 name=/runtime.v1.ImageService/PullImage
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.616771105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1e718ce2-5c5a-436d-8365-abbf983c66c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.618540114Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4104acf8-d0ae-4a01-83bb-447fa42ca67e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.627835212Z" level=info msg="Creating container: default/busybox/busybox" id=9978dfe2-8949-42af-b9ff-85efbc78a503 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.627967738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.638099985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.638630131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.657805158Z" level=info msg="Created container 6deb013053c81189249c6c190727b08168ef84d4536e3fbe4548b59c3cadaba2: default/busybox/busybox" id=9978dfe2-8949-42af-b9ff-85efbc78a503 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.660916616Z" level=info msg="Starting container: 6deb013053c81189249c6c190727b08168ef84d4536e3fbe4548b59c3cadaba2" id=f634037e-4a4c-4b62-a0f4-d49cd10a37f0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.66351356Z" level=info msg="Started container" PID=4904 containerID=6deb013053c81189249c6c190727b08168ef84d4536e3fbe4548b59c3cadaba2 description=default/busybox/busybox id=f634037e-4a4c-4b62-a0f4-d49cd10a37f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab53f463a1113a90ff375618559f4228e6222211cee0ec71bd71798e22d7a3e0
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.884949716Z" level=info msg="Removing container: 28114b5b86dc05a7fcc8e3d7c53eac86eae5e77070a5ba67474dd85ec87a3194" id=e954567d-6c48-4d49-8d02-7b9fc66250d2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.887556915Z" level=info msg="Error loading conmon cgroup of container 28114b5b86dc05a7fcc8e3d7c53eac86eae5e77070a5ba67474dd85ec87a3194: cgroup deleted" id=e954567d-6c48-4d49-8d02-7b9fc66250d2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.893053646Z" level=info msg="Removed container 28114b5b86dc05a7fcc8e3d7c53eac86eae5e77070a5ba67474dd85ec87a3194: gcp-auth/gcp-auth-certs-create-bbwpp/create" id=e954567d-6c48-4d49-8d02-7b9fc66250d2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.896239934Z" level=info msg="Removing container: 615179481da3292d362c074e4edc61f89e43f048ab83e7e55d90c00329dcc2bb" id=c9a0d3b4-506a-4a2b-9921-ed883e47c254 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.898573458Z" level=info msg="Error loading conmon cgroup of container 615179481da3292d362c074e4edc61f89e43f048ab83e7e55d90c00329dcc2bb: cgroup deleted" id=c9a0d3b4-506a-4a2b-9921-ed883e47c254 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.905191778Z" level=info msg="Removed container 615179481da3292d362c074e4edc61f89e43f048ab83e7e55d90c00329dcc2bb: gcp-auth/gcp-auth-certs-patch-hcvvm/patch" id=c9a0d3b4-506a-4a2b-9921-ed883e47c254 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.908191682Z" level=info msg="Stopping pod sandbox: c61b82d05bc9cc2e9c1842cfd2d7ccc6b8b0bc35beb95a14eae23311192c0d66" id=19ee5e3c-ccf8-4545-a6b0-c175b64c867d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.908269793Z" level=info msg="Stopped pod sandbox (already stopped): c61b82d05bc9cc2e9c1842cfd2d7ccc6b8b0bc35beb95a14eae23311192c0d66" id=19ee5e3c-ccf8-4545-a6b0-c175b64c867d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.909346494Z" level=info msg="Removing pod sandbox: c61b82d05bc9cc2e9c1842cfd2d7ccc6b8b0bc35beb95a14eae23311192c0d66" id=d7964334-c3b4-4138-8801-aa825d44fc4e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.914929533Z" level=info msg="Removed pod sandbox: c61b82d05bc9cc2e9c1842cfd2d7ccc6b8b0bc35beb95a14eae23311192c0d66" id=d7964334-c3b4-4138-8801-aa825d44fc4e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.915740689Z" level=info msg="Stopping pod sandbox: 7396134cb288ddc426ffb9c6d8e3f6c45b74d0ee03847e5e6a03485af3c01ea8" id=2703a591-788f-4b99-a8bb-2847110c1164 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.915801733Z" level=info msg="Stopped pod sandbox (already stopped): 7396134cb288ddc426ffb9c6d8e3f6c45b74d0ee03847e5e6a03485af3c01ea8" id=2703a591-788f-4b99-a8bb-2847110c1164 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.916161142Z" level=info msg="Removing pod sandbox: 7396134cb288ddc426ffb9c6d8e3f6c45b74d0ee03847e5e6a03485af3c01ea8" id=75fd7227-a1c1-4a7b-bcff-115275283c85 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 17 00:31:47 addons-219291 crio[829]: time="2025-12-17T00:31:47.921475707Z" level=info msg="Removed pod sandbox: 7396134cb288ddc426ffb9c6d8e3f6c45b74d0ee03847e5e6a03485af3c01ea8" id=75fd7227-a1c1-4a7b-bcff-115275283c85 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	6deb013053c81       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   ab53f463a1113       busybox                                     default
	bf4c29dbf6234       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 14 seconds ago       Running             gcp-auth                                 0                   ee1386eb9013f       gcp-auth-78565c9fb4-8zdw7                   gcp-auth
	193890a73e001       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          25 seconds ago       Running             csi-snapshotter                          0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	1bbd4a6a667e4       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          26 seconds ago       Running             csi-provisioner                          0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	4c9431192a983       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            27 seconds ago       Running             liveness-probe                           0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	fa0b6e31d74dd       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           29 seconds ago       Running             hostpath                                 0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	8d6e374670dcd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                30 seconds ago       Running             node-driver-registrar                    0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	cad1a09616cb3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            32 seconds ago       Running             gadget                                   0                   9db5e8859117a       gadget-ml2x5                                gadget
	c8aaf29c36d11       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             35 seconds ago       Running             controller                               0                   a46bbee15e4c1       ingress-nginx-controller-85d4c799dd-rf2z8   ingress-nginx
	29ad784e8ed80       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              42 seconds ago       Running             csi-resizer                              0                   69a790bf7d75f       csi-hostpath-resizer-0                      kube-system
	a051b23901572       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   44 seconds ago       Running             csi-external-health-monitor-controller   0                   160becaf6bacb       csi-hostpathplugin-btcsg                    kube-system
	d8e39af946260       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      45 seconds ago       Running             volume-snapshot-controller               0                   c88d024cce8b9       snapshot-controller-7d9fbc56b8-dbmhn        kube-system
	6a9e26980319f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              46 seconds ago       Running             registry-proxy                           0                   eb7f527a31979       registry-proxy-f4nhl                        kube-system
	b3b584a64d334       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      49 seconds ago       Running             volume-snapshot-controller               0                   8197affe2e0ea       snapshot-controller-7d9fbc56b8-gwhl5        kube-system
	c0e9ccefa063f       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             50 seconds ago       Running             csi-attacher                             0                   445dc29978230       csi-hostpath-attacher-0                     kube-system
	570b13b93f9ef       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              52 seconds ago       Running             yakd                                     0                   8c0c0c23548c3       yakd-dashboard-5ff678cb9-lmk68              yakd-dashboard
	137e2ee1d0566       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             56 seconds ago       Running             local-path-provisioner                   0                   70b723ad34385       local-path-provisioner-648f6765c9-49qhs     local-path-storage
	9733ba6e686c6       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               57 seconds ago       Running             minikube-ingress-dns                     0                   89f504f071526       kube-ingress-dns-minikube                   kube-system
	157b30139cf36       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   b1a122fa78517       ingress-nginx-admission-patch-fwl2h         ingress-nginx
	fa9db7f2a2a56       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   cb59264e22ea1       cloud-spanner-emulator-5bdddb765-qvdx4      default
	6fbc1aa1c1165       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   d757747ad8d60       registry-6b586f9694-zh49c                   kube-system
	6be3d66db02da       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   30eaadd845edc       nvidia-device-plugin-daemonset-86n5b        kube-system
	f9b83d4bb59ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   fdd85f8102eeb       ingress-nginx-admission-create-qw5z7        ingress-nginx
	7e472f122d8fb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   d8796618f8955       metrics-server-85b7d694d7-h9vmz             kube-system
	ee62b48d5f8a8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   621b8fe0d93b6       storage-provisioner                         kube-system
	fa923421199e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   2c773b690730d       coredns-66bc5c9577-2l8cm                    kube-system
	d80c862e4d310       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   a9b03219e5a17       kindnet-6tjsd                               kube-system
	6111e6b00517f       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   82a3e3b9ed862       kube-proxy-2c69d                            kube-system
	a43c51ac35173       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   a1060265c367f       kube-apiserver-addons-219291                kube-system
	641fd3059b8b5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   45d40c0fc6561       kube-scheduler-addons-219291                kube-system
	d981f5abaaa97       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   5cf74d0f7ba7c       kube-controller-manager-addons-219291       kube-system
	d607d9f1296a5       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   ccbd3741ff720       etcd-addons-219291                          kube-system
	
	
	==> coredns [fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4] <==
	[INFO] 10.244.0.12:59793 - 27172 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000221525s
	[INFO] 10.244.0.12:59793 - 60964 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002137442s
	[INFO] 10.244.0.12:59793 - 1673 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001831997s
	[INFO] 10.244.0.12:59793 - 14456 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010627s
	[INFO] 10.244.0.12:59793 - 7620 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000098361s
	[INFO] 10.244.0.12:41402 - 30993 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137859s
	[INFO] 10.244.0.12:41402 - 31470 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084995s
	[INFO] 10.244.0.12:46705 - 14255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085963s
	[INFO] 10.244.0.12:46705 - 14067 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081836s
	[INFO] 10.244.0.12:53037 - 58130 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109708s
	[INFO] 10.244.0.12:53037 - 58600 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00032206s
	[INFO] 10.244.0.12:46852 - 2939 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001405029s
	[INFO] 10.244.0.12:46852 - 2759 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001778633s
	[INFO] 10.244.0.12:42786 - 40771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000130622s
	[INFO] 10.244.0.12:42786 - 40607 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175306s
	[INFO] 10.244.0.21:39163 - 56174 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018439s
	[INFO] 10.244.0.21:38038 - 16555 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000073089s
	[INFO] 10.244.0.21:49406 - 57259 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011907s
	[INFO] 10.244.0.21:35381 - 49278 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000070899s
	[INFO] 10.244.0.21:47165 - 43381 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007624s
	[INFO] 10.244.0.21:38303 - 6924 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064056s
	[INFO] 10.244.0.21:42406 - 47839 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002264668s
	[INFO] 10.244.0.21:45961 - 51427 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002581075s
	[INFO] 10.244.0.21:44627 - 25271 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002393937s
	[INFO] 10.244.0.21:48002 - 57596 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002023869s
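
The NXDOMAIN-then-NOERROR pattern in the CoreDNS log above is ordinary resolv.conf search-path expansion, not a failure. A minimal sketch, assuming the usual in-cluster ndots:5 and inferring the search list from the queried names, reproduces the sequence logged for registry.kube-system.svc.cluster.local:

package main

import (
	"fmt"
	"strings"
)

func main() {
	name := "registry.kube-system.svc.cluster.local" // 4 dots, below the ndots:5 threshold
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	if strings.Count(name, ".") < 5 { // ndots:5 is an assumption about the pod's resolv.conf
		for _, d := range search {
			fmt.Printf("%s.%s -> NXDOMAIN (tried first, no such record)\n", name, d)
		}
	}
	fmt.Printf("%s -> NOERROR (absolute lookup finally succeeds)\n", name)
}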
	
	
	==> describe nodes <==
	Name:               addons-219291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-219291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=addons-219291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_29_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-219291
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-219291"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-219291
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:31:30 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:31:30 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:31:30 +0000   Wed, 17 Dec 2025 00:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:31:30 +0000   Wed, 17 Dec 2025 00:30:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-219291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                0d8ad118-bc04-4408-bc07-b66d8fba29fb
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-qvdx4       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gadget                      gadget-ml2x5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-8zdw7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-rf2z8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         117s
	  kube-system                 coredns-66bc5c9577-2l8cm                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-btcsg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 etcd-addons-219291                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-6tjsd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-219291                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-219291        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-2c69d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-219291                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-h9vmz              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-86n5b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 registry-6b586f9694-zh49c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-764b6fb674-h6f8z              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-proxy-f4nhl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 snapshot-controller-7d9fbc56b8-dbmhn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-gwhl5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-49qhs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lmk68               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m2s                   kube-proxy       
	  Warning  CgroupV1                 2m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node addons-219291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node addons-219291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node addons-219291 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node addons-219291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node addons-219291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node addons-219291 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s                   node-controller  Node addons-219291 event: Registered Node addons-219291 in Controller
	  Normal   NodeReady                82s                    kubelet          Node addons-219291 status is now: NodeReady
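
The percentages in the Allocated resources table above follow from the node's Allocatable values (2 CPUs = 2000m, 8022292Ki memory); a small sketch of the arithmetic, with integer truncation reproducing the printed figures:

package main

import "fmt"

func main() {
	pct := func(used, total float64) int { return int(used / total * 100) }

	fmt.Println("cpu requests:", pct(1050, 2000), "%")        // 52 (1050m of 2000m)
	fmt.Println("cpu limits:  ", pct(100, 2000), "%")         // 5
	fmt.Println("mem requests:", pct(638*1024, 8022292), "%") // 8 (638Mi of 8022292Ki)
	fmt.Println("mem limits:  ", pct(476*1024, 8022292), "%") // 6
}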
	
	
	==> dmesg <==
	[Dec16 23:34] overlayfs: idmapped layers are currently not supported
	[Dec16 23:35] overlayfs: idmapped layers are currently not supported
	[Dec16 23:37] overlayfs: idmapped layers are currently not supported
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e] <==
	{"level":"warn","ts":"2025-12-17T00:29:43.954327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:43.973803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:43.979309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.003334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.041156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.067786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.106533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.135885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.164615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.194037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.209289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.273200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.295070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.319677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.341282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.369402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.385290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.404760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:29:44.496473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:00.744986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:00.769522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.278671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.300764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.328909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:30:22.345281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [bf4c29dbf62345eb6caaea8b0d74345f55043a98dd8d809c1936c697d1d5f891] <==
	2025/12/17 00:31:42 GCP Auth Webhook started!
	2025/12/17 00:31:44 Ready to marshal response ...
	2025/12/17 00:31:44 Ready to write response ...
	2025/12/17 00:31:45 Ready to marshal response ...
	2025/12/17 00:31:45 Ready to write response ...
	2025/12/17 00:31:45 Ready to marshal response ...
	2025/12/17 00:31:45 Ready to write response ...
	
	
	==> kernel <==
	 00:31:56 up  6:14,  0 user,  load average: 1.96, 2.04, 1.81
	Linux addons-219291 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3] <==
	E1217 00:30:24.436378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1217 00:30:24.436379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1217 00:30:24.436655       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1217 00:30:24.436747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1217 00:30:25.937189       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 00:30:25.937312       1 metrics.go:72] Registering metrics
	I1217 00:30:25.937416       1 controller.go:711] "Syncing nftables rules"
	I1217 00:30:34.443084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:30:34.443135       1 main.go:301] handling current node
	I1217 00:30:44.435304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:30:44.435337       1 main.go:301] handling current node
	I1217 00:30:54.435828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:30:54.435858       1 main.go:301] handling current node
	I1217 00:31:04.435532       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:04.435566       1 main.go:301] handling current node
	I1217 00:31:14.435398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:14.435436       1 main.go:301] handling current node
	I1217 00:31:24.436542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:24.436570       1 main.go:301] handling current node
	I1217 00:31:34.436383       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:34.436446       1 main.go:301] handling current node
	I1217 00:31:44.436607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:44.436660       1 main.go:301] handling current node
	I1217 00:31:54.440688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 00:31:54.440780       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4] <==
	W1217 00:30:00.742493       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:00.765240       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1217 00:30:03.580515       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.49.120"}
	W1217 00:30:22.278413       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:22.298434       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:22.324838       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:22.339823       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 00:30:35.034309       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.036102       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:35.037160       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.037769       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:35.115075       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.49.120:443: connect: connection refused
	E1217 00:30:35.115721       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.49.120:443: connect: connection refused" logger="UnhandledError"
	E1217 00:30:39.852377       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	W1217 00:30:39.852501       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 00:30:39.852556       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 00:30:39.852981       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	E1217 00:30:39.859552       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.181.61:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.181.61:443: connect: connection refused" logger="UnhandledError"
	I1217 00:30:39.980233       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 00:31:54.517303       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45186: use of closed network connection
	E1217 00:31:54.765004       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45218: use of closed network connection
	E1217 00:31:54.896659       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45248: use of closed network connection
	
	
	==> kube-controller-manager [d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60] <==
	I1217 00:29:52.284931       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 00:29:52.285101       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 00:29:52.285254       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 00:29:52.286949       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 00:29:52.286998       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 00:29:52.289384       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 00:29:52.292064       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 00:29:52.292456       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:29:52.303775       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 00:29:52.312110       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 00:29:52.329489       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 00:29:52.332985       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:29:52.333007       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 00:29:52.333014       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 00:29:52.338339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E1217 00:29:58.569614       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 00:29:58.588675       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1217 00:30:22.270355       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1217 00:30:22.270627       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 00:30:22.270706       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 00:30:22.304295       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1217 00:30:22.309115       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 00:30:22.371884       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 00:30:22.409666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:30:37.295774       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2] <==
	I1217 00:29:54.271167       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:29:54.366518       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:29:54.496156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:29:54.496196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 00:29:54.496276       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:29:54.571953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 00:29:54.572020       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:29:54.578157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:29:54.578484       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:29:54.578500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:29:54.579895       1 config.go:200] "Starting service config controller"
	I1217 00:29:54.579907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:29:54.579922       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:29:54.579927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:29:54.579936       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:29:54.579940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:29:54.580869       1 config.go:309] "Starting node config controller"
	I1217 00:29:54.580878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:29:54.580886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:29:54.680034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:29:54.680069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:29:54.680094       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5] <==
	I1217 00:29:45.354625       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 00:29:45.376202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 00:29:45.394762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:29:45.394929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:29:45.395368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:29:45.395534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:29:45.395694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 00:29:45.395798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:29:45.395967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:29:45.396149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:29:45.396241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:29:45.396328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:29:45.396447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:29:45.396549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:29:45.396662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:29:45.396747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:29:45.396886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:29:45.396932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:29:45.397007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:29:45.398310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:29:46.261078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:29:46.343039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:29:46.362662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:29:46.457872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1217 00:29:48.956929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:31:14 addons-219291 kubelet[1282]: I1217 00:31:14.444505    1282 scope.go:117] "RemoveContainer" containerID="9bbcb4aa42241e5561da5daf8c4c0987ab8125eecde25e4c6e367399768c2545"
	Dec 17 00:31:14 addons-219291 kubelet[1282]: I1217 00:31:14.627873    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4bh9\" (UniqueName: \"kubernetes.io/projected/c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2-kube-api-access-r4bh9\") pod \"c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2\" (UID: \"c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2\") "
	Dec 17 00:31:14 addons-219291 kubelet[1282]: I1217 00:31:14.630190    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2-kube-api-access-r4bh9" (OuterVolumeSpecName: "kube-api-access-r4bh9") pod "c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2" (UID: "c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2"). InnerVolumeSpecName "kube-api-access-r4bh9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 00:31:14 addons-219291 kubelet[1282]: I1217 00:31:14.729117    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4bh9\" (UniqueName: \"kubernetes.io/projected/c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2-kube-api-access-r4bh9\") on node \"addons-219291\" DevicePath \"\""
	Dec 17 00:31:15 addons-219291 kubelet[1282]: I1217 00:31:15.493062    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=38.131110773 podStartE2EDuration="1m16.493042396s" podCreationTimestamp="2025-12-17 00:29:59 +0000 UTC" firstStartedPulling="2025-12-17 00:30:36.12669017 +0000 UTC m=+48.385468897" lastFinishedPulling="2025-12-17 00:31:14.488621794 +0000 UTC m=+86.747400520" observedRunningTime="2025-12-17 00:31:15.491913684 +0000 UTC m=+87.750692419" watchObservedRunningTime="2025-12-17 00:31:15.493042396 +0000 UTC m=+87.751821122"
	Dec 17 00:31:15 addons-219291 kubelet[1282]: I1217 00:31:15.505784    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c61b82d05bc9cc2e9c1842cfd2d7ccc6b8b0bc35beb95a14eae23311192c0d66"
	Dec 17 00:31:16 addons-219291 kubelet[1282]: I1217 00:31:16.042229    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2ddg\" (UniqueName: \"kubernetes.io/projected/25ed1e1d-fbd7-4476-815d-a5c4897b8077-kube-api-access-p2ddg\") pod \"25ed1e1d-fbd7-4476-815d-a5c4897b8077\" (UID: \"25ed1e1d-fbd7-4476-815d-a5c4897b8077\") "
	Dec 17 00:31:16 addons-219291 kubelet[1282]: I1217 00:31:16.050447    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ed1e1d-fbd7-4476-815d-a5c4897b8077-kube-api-access-p2ddg" (OuterVolumeSpecName: "kube-api-access-p2ddg") pod "25ed1e1d-fbd7-4476-815d-a5c4897b8077" (UID: "25ed1e1d-fbd7-4476-815d-a5c4897b8077"). InnerVolumeSpecName "kube-api-access-p2ddg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 00:31:16 addons-219291 kubelet[1282]: I1217 00:31:16.143166    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2ddg\" (UniqueName: \"kubernetes.io/projected/25ed1e1d-fbd7-4476-815d-a5c4897b8077-kube-api-access-p2ddg\") on node \"addons-219291\" DevicePath \"\""
	Dec 17 00:31:16 addons-219291 kubelet[1282]: I1217 00:31:16.547421    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7396134cb288ddc426ffb9c6d8e3f6c45b74d0ee03847e5e6a03485af3c01ea8"
	Dec 17 00:31:24 addons-219291 kubelet[1282]: I1217 00:31:24.603596    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-rf2z8" podStartSLOduration=49.275047946 podStartE2EDuration="1m25.603574217s" podCreationTimestamp="2025-12-17 00:29:59 +0000 UTC" firstStartedPulling="2025-12-17 00:30:44.425071069 +0000 UTC m=+56.683849804" lastFinishedPulling="2025-12-17 00:31:20.753596987 +0000 UTC m=+93.012376075" observedRunningTime="2025-12-17 00:31:21.594737886 +0000 UTC m=+93.853516613" watchObservedRunningTime="2025-12-17 00:31:24.603574217 +0000 UTC m=+96.862352969"
	Dec 17 00:31:27 addons-219291 kubelet[1282]: I1217 00:31:27.451778    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-ml2x5" podStartSLOduration=67.838315435 podStartE2EDuration="1m29.451760322s" podCreationTimestamp="2025-12-17 00:29:58 +0000 UTC" firstStartedPulling="2025-12-17 00:31:02.784294838 +0000 UTC m=+75.043073565" lastFinishedPulling="2025-12-17 00:31:24.397739726 +0000 UTC m=+96.656518452" observedRunningTime="2025-12-17 00:31:24.605671521 +0000 UTC m=+96.864450256" watchObservedRunningTime="2025-12-17 00:31:27.451760322 +0000 UTC m=+99.710539048"
	Dec 17 00:31:28 addons-219291 kubelet[1282]: I1217 00:31:28.105202    1282 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 17 00:31:28 addons-219291 kubelet[1282]: I1217 00:31:28.109146    1282 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 17 00:31:31 addons-219291 kubelet[1282]: I1217 00:31:31.657748    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-btcsg" podStartSLOduration=1.244141246 podStartE2EDuration="56.657730061s" podCreationTimestamp="2025-12-17 00:30:35 +0000 UTC" firstStartedPulling="2025-12-17 00:30:36.054734253 +0000 UTC m=+48.313512988" lastFinishedPulling="2025-12-17 00:31:31.468323077 +0000 UTC m=+103.727101803" observedRunningTime="2025-12-17 00:31:31.654548795 +0000 UTC m=+103.913327530" watchObservedRunningTime="2025-12-17 00:31:31.657730061 +0000 UTC m=+103.916508796"
	Dec 17 00:31:38 addons-219291 kubelet[1282]: E1217 00:31:38.867530    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 17 00:31:38 addons-219291 kubelet[1282]: E1217 00:31:38.867630    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce-gcr-creds podName:ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce nodeName:}" failed. No retries permitted until 2025-12-17 00:32:42.867611031 +0000 UTC m=+175.126389758 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce-gcr-creds") pod "registry-creds-764b6fb674-h6f8z" (UID: "ad7100cb-0a7c-4ab9-9c96-3ac7e657b4ce") : secret "registry-creds-gcr" not found
	Dec 17 00:31:45 addons-219291 kubelet[1282]: I1217 00:31:45.148356    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-8zdw7" podStartSLOduration=99.080210133 podStartE2EDuration="1m42.148332833s" podCreationTimestamp="2025-12-17 00:30:03 +0000 UTC" firstStartedPulling="2025-12-17 00:31:39.346369112 +0000 UTC m=+111.605147838" lastFinishedPulling="2025-12-17 00:31:42.414491811 +0000 UTC m=+114.673270538" observedRunningTime="2025-12-17 00:31:42.703749453 +0000 UTC m=+114.962528196" watchObservedRunningTime="2025-12-17 00:31:45.148332833 +0000 UTC m=+117.407111568"
	Dec 17 00:31:45 addons-219291 kubelet[1282]: I1217 00:31:45.342916    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/75e14858-acc7-478e-b6f4-1ead1bced578-gcp-creds\") pod \"busybox\" (UID: \"75e14858-acc7-478e-b6f4-1ead1bced578\") " pod="default/busybox"
	Dec 17 00:31:45 addons-219291 kubelet[1282]: I1217 00:31:45.343228    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dq5\" (UniqueName: \"kubernetes.io/projected/75e14858-acc7-478e-b6f4-1ead1bced578-kube-api-access-t6dq5\") pod \"busybox\" (UID: \"75e14858-acc7-478e-b6f4-1ead1bced578\") " pod="default/busybox"
	Dec 17 00:31:45 addons-219291 kubelet[1282]: I1217 00:31:45.883294    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2" path="/var/lib/kubelet/pods/c6003b9e-e5c4-4e96-9f3e-a1e76d3d0de2/volumes"
	Dec 17 00:31:47 addons-219291 kubelet[1282]: I1217 00:31:47.882624    1282 scope.go:117] "RemoveContainer" containerID="28114b5b86dc05a7fcc8e3d7c53eac86eae5e77070a5ba67474dd85ec87a3194"
	Dec 17 00:31:47 addons-219291 kubelet[1282]: I1217 00:31:47.884479    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ed1e1d-fbd7-4476-815d-a5c4897b8077" path="/var/lib/kubelet/pods/25ed1e1d-fbd7-4476-815d-a5c4897b8077/volumes"
	Dec 17 00:31:47 addons-219291 kubelet[1282]: I1217 00:31:47.894974    1282 scope.go:117] "RemoveContainer" containerID="615179481da3292d362c074e4edc61f89e43f048ab83e7e55d90c00329dcc2bb"
	Dec 17 00:31:48 addons-219291 kubelet[1282]: E1217 00:31:48.006490    1282 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/027cc42d81bbb8de8e73b36194623902f8b1e574911269303255a88c86108678/diff" to get inode usage: stat /var/lib/containers/storage/overlay/027cc42d81bbb8de8e73b36194623902f8b1e574911269303255a88c86108678/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58] <==
	W1217 00:31:32.572481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:34.575776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:34.580547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:36.584603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:36.589391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:38.592340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:38.599073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:40.602318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:40.609749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:42.613237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:42.620081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:44.623648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:44.630481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:46.633825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:46.638737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:48.641806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:48.648930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:50.651495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:50.656215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:52.659294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:52.664581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:54.669558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:54.676095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:56.680090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:31:56.688845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
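	The storage-provisioner log above repeatedly warns that core/v1 Endpoints is deprecated in v1.33+ and recommends discovery.k8s.io/v1 EndpointSlice. As a hedged illustration of that suggested replacement (this snippet is not part of the provisioner or of minikube; the in-cluster config assumption and all names are mine), the equivalent client-go listing looks like:

	```go
	// Minimal sketch, assuming in-cluster credentials; not taken from the
	// storage-provisioner source. It lists EndpointSlices from the
	// discovery.k8s.io/v1 group, which the deprecation warning recommends
	// instead of core/v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Old call that triggers the "v1 Endpoints is deprecated" warning:
		//   client.CoreV1().Endpoints("kube-system").List(...)
		// Recommended replacement:
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}
	```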
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-219291 -n addons-219291
helpers_test.go:270: (dbg) Run:  kubectl --context addons-219291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z: exit status 1 (88.591699ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qw5z7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fwl2h" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-h6f8z" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-219291 describe pod ingress-nginx-admission-create-qw5z7 ingress-nginx-admission-patch-fwl2h registry-creds-764b6fb674-h6f8z: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable headlamp --alsologtostderr -v=1: exit status 11 (275.061016ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:31:58.071773 1144112 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:31:58.072657 1144112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:58.072707 1144112 out.go:374] Setting ErrFile to fd 2...
	I1217 00:31:58.072729 1144112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:31:58.073031 1144112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:31:58.073384 1144112 mustload.go:66] Loading cluster: addons-219291
	I1217 00:31:58.073810 1144112 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:58.073847 1144112 addons.go:622] checking whether the cluster is paused
	I1217 00:31:58.073984 1144112 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:31:58.074013 1144112 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:31:58.074569 1144112 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:31:58.095126 1144112 ssh_runner.go:195] Run: systemctl --version
	I1217 00:31:58.095196 1144112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:31:58.114104 1144112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:31:58.211291 1144112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:31:58.211403 1144112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:31:58.243063 1144112 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:31:58.243087 1144112 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:31:58.243092 1144112 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:31:58.243096 1144112 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:31:58.243108 1144112 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:31:58.243112 1144112 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:31:58.243115 1144112 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:31:58.243119 1144112 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:31:58.243122 1144112 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:31:58.243134 1144112 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:31:58.243145 1144112 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:31:58.243157 1144112 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:31:58.243160 1144112 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:31:58.243163 1144112 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:31:58.243166 1144112 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:31:58.243184 1144112 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:31:58.243192 1144112 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:31:58.243201 1144112 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:31:58.243205 1144112 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:31:58.243209 1144112 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:31:58.243213 1144112 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:31:58.243216 1144112 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:31:58.243220 1144112 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:31:58.243226 1144112 cri.go:89] found id: ""
	I1217 00:31:58.243284 1144112 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:31:58.259974 1144112 out.go:203] 
	W1217 00:31:58.263031 1144112 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:31:58.263060 1144112 out.go:285] * 
	* 
	W1217 00:31:58.272852 1144112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:31:58.276035 1144112 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.12s)
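Every `addons disable` failure in this report shows the same stderr: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, so the paused-state check aborts with MK_ADDON_DISABLE_PAUSED before the addon is ever touched. Below is a minimal sketch of that probe, assuming hypothetical helper names (this is not minikube's actual implementation), which tolerates the missing runc state directory on CRI-O nodes instead of treating it as fatal:

```go
// Hedged sketch only: function names and error handling are illustrative,
// not minikube's real code. It reproduces the failing probe seen in the
// stderr above and treats a missing /run/runc state directory as
// "nothing is paused" rather than a hard error.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On CRI-O nodes runc keeps no state under /run/runc, so the listing
		// fails exactly as in the logs: "open /run/runc: no such file or directory".
		if strings.Contains(string(out), "no such file or directory") {
			return "[]", nil // nothing to list, so nothing can be paused
		}
		return "", fmt.Errorf("runc list: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	containers, err := listRuncContainers()
	if err != nil {
		fmt.Println("paused check failed:", err)
		return
	}
	fmt.Println("runc containers:", containers)
}
```

The same stderr pattern repeats verbatim in the CloudSpanner, LocalPath, and NvidiaDevicePlugin failures that follow, so the sketch is shown only once here.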

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-qvdx4" [8b0a448e-c573-463a-afc1-21ae6127f34a] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014106808s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (281.590354ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:15.043319 1144579 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:15.044332 1144579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:15.044454 1144579 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:15.044499 1144579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:15.044900 1144579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:15.045356 1144579 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:15.045833 1144579 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:15.045884 1144579 addons.go:622] checking whether the cluster is paused
	I1217 00:32:15.046061 1144579 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:15.046102 1144579 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:15.046712 1144579 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:15.067140 1144579 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:15.067211 1144579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:15.087193 1144579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:15.187378 1144579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:15.187521 1144579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:15.219868 1144579 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:15.219894 1144579 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:15.219900 1144579 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:15.219904 1144579 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:15.219908 1144579 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:15.219912 1144579 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:15.219915 1144579 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:15.219918 1144579 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:15.219945 1144579 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:15.219967 1144579 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:15.219972 1144579 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:15.219983 1144579 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:15.219986 1144579 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:15.219989 1144579 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:15.219993 1144579 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:15.220021 1144579 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:15.220032 1144579 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:15.220037 1144579 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:15.220053 1144579 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:15.220067 1144579 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:15.220073 1144579 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:15.220077 1144579 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:15.220083 1144579 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:15.220086 1144579 cri.go:89] found id: ""
	I1217 00:32:15.220159 1144579 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:15.236774 1144579 out.go:203] 
	W1217 00:32:15.240411 1144579 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:15.240481 1144579 out.go:285] * 
	* 
	W1217 00:32:15.248981 1144579 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:15.252376 1144579 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-219291 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-219291 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [0891b669-de06-4df7-8a57-3c0c97b1df06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [0891b669-de06-4df7-8a57-3c0c97b1df06] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [0891b669-de06-4df7-8a57-3c0c97b1df06] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005223741s
addons_test.go:969: (dbg) Run:  kubectl --context addons-219291 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 ssh "cat /opt/local-path-provisioner/pvc-f193b14d-0c21-4e98-b1c3-e5e46af63ea3_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-219291 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-219291 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (282.738019ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:18.430971 1144733 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:18.432617 1144733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:18.432637 1144733 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:18.432644 1144733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:18.433107 1144733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:18.433504 1144733 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:18.433994 1144733 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:18.434022 1144733 addons.go:622] checking whether the cluster is paused
	I1217 00:32:18.434245 1144733 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:18.434267 1144733 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:18.434834 1144733 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:18.452528 1144733 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:18.452597 1144733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:18.474205 1144733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:18.587480 1144733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:18.587607 1144733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:18.618302 1144733 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:18.618326 1144733 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:18.618332 1144733 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:18.618336 1144733 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:18.618339 1144733 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:18.618343 1144733 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:18.618346 1144733 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:18.618349 1144733 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:18.618373 1144733 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:18.618384 1144733 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:18.618388 1144733 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:18.618392 1144733 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:18.618395 1144733 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:18.618399 1144733 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:18.618402 1144733 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:18.618414 1144733 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:18.618423 1144733 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:18.618428 1144733 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:18.618431 1144733 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:18.618434 1144733 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:18.618450 1144733 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:18.618454 1144733 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:18.618458 1144733 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:18.618461 1144733 cri.go:89] found id: ""
	I1217 00:32:18.618526 1144733 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:18.642111 1144733 out.go:203] 
	W1217 00:32:18.645139 1144733 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:18.645175 1144733 out.go:285] * 
	* 
	W1217 00:32:18.653396 1144733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:18.657578 1144733 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.38s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-86n5b" [5453531e-fa38-4a92-ae3e-e32fdd20b8b5] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004499952s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (384.869738ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:09.698074 1144399 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:09.699817 1144399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:09.699859 1144399 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:09.699884 1144399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:09.700259 1144399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:09.703160 1144399 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:09.705657 1144399 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:09.705732 1144399 addons.go:622] checking whether the cluster is paused
	I1217 00:32:09.705886 1144399 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:09.705917 1144399 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:09.706496 1144399 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:09.747349 1144399 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:09.747440 1144399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:09.777830 1144399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:09.874959 1144399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:09.875049 1144399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:09.920139 1144399 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:09.920163 1144399 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:09.920169 1144399 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:09.920173 1144399 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:09.920177 1144399 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:09.920180 1144399 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:09.920184 1144399 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:09.920187 1144399 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:09.920190 1144399 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:09.920196 1144399 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:09.920200 1144399 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:09.920203 1144399 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:09.920206 1144399 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:09.920212 1144399 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:09.920215 1144399 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:09.920220 1144399 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:09.920228 1144399 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:09.920239 1144399 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:09.920242 1144399 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:09.920245 1144399 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:09.920250 1144399 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:09.920260 1144399 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:09.920264 1144399 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:09.920276 1144399 cri.go:89] found id: ""
	I1217 00:32:09.920327 1144399 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:09.937064 1144399 out.go:203] 
	W1217 00:32:09.939981 1144399 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:09.940005 1144399 out.go:285] * 
	* 
	W1217 00:32:09.948230 1144399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:09.951125 1144399 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.39s)
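Both this failure and the Yakd failure below stop at minikube's paused-state probe rather than at the addon itself: before disabling an addon, minikube lists kube-system containers with crictl (which succeeds above) and then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this CRI-O node. A minimal way to reproduce that probe by hand, assuming the addons-219291 profile from this run is still up:

    # The probe minikube runs (expected to fail here with "open /run/runc: no such file or directory"):
    out/minikube-linux-arm64 -p addons-219291 ssh "sudo runc list -f json"
    # The same kube-system containers are visible through CRI-O's own tooling:
    out/minikube-linux-arm64 -p addons-219291 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"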

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-lmk68" [ee36146b-66c9-4479-bc43-45a23701af7b] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004202408s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-219291 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-219291 addons disable yakd --alsologtostderr -v=1: exit status 11 (275.102483ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:04.340367 1144175 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:04.341231 1144175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:04.341275 1144175 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:04.341297 1144175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:04.341589 1144175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:32:04.341952 1144175 mustload.go:66] Loading cluster: addons-219291
	I1217 00:32:04.342383 1144175 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:04.342428 1144175 addons.go:622] checking whether the cluster is paused
	I1217 00:32:04.342592 1144175 config.go:182] Loaded profile config "addons-219291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:32:04.342630 1144175 host.go:66] Checking if "addons-219291" exists ...
	I1217 00:32:04.343182 1144175 cli_runner.go:164] Run: docker container inspect addons-219291 --format={{.State.Status}}
	I1217 00:32:04.362394 1144175 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:04.362462 1144175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-219291
	I1217 00:32:04.385210 1144175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/addons-219291/id_rsa Username:docker}
	I1217 00:32:04.487275 1144175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:32:04.487369 1144175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:32:04.519454 1144175 cri.go:89] found id: "193890a73e00198dd93d777266cc4564c9239f3fb9996d8fedc39bfcf0bbe99f"
	I1217 00:32:04.519479 1144175 cri.go:89] found id: "1bbd4a6a667e41f91ede1c873a273d8ea8573d6d5ec857fa8cf9eacb6b05442c"
	I1217 00:32:04.519497 1144175 cri.go:89] found id: "4c9431192a983b5d3a468d592ba4cd6bab2dd451283fb537dd0ae3b1b129942e"
	I1217 00:32:04.519501 1144175 cri.go:89] found id: "fa0b6e31d74dd8aaf447207760d15070f9835875a70ece1c323be0e1c1887479"
	I1217 00:32:04.519505 1144175 cri.go:89] found id: "8d6e374670dcde058eed55bd70a5571d21abaa10312b961f23351ad235ffedcc"
	I1217 00:32:04.519509 1144175 cri.go:89] found id: "29ad784e8ed80fe86a42ee05f7444316bf6ae1c18586108c9ac7f6eabfea88af"
	I1217 00:32:04.519513 1144175 cri.go:89] found id: "a051b23901572eb3b645ae29e5258245aaef93241f25ed031db091db521c5b3a"
	I1217 00:32:04.519516 1144175 cri.go:89] found id: "d8e39af94626062853d0ac5be8cb3b794bb5937cdbab84a7cfba86f1ab6b6dcb"
	I1217 00:32:04.519520 1144175 cri.go:89] found id: "6a9e26980319f445b2cfef9bad234e16b81c7ad367c2397020287e5a20b1af72"
	I1217 00:32:04.519526 1144175 cri.go:89] found id: "b3b584a64d33486227dd1befcfad3fc99063799d5512f91f082953bd1ac39d97"
	I1217 00:32:04.519533 1144175 cri.go:89] found id: "c0e9ccefa063f93dd5fa91de156832240e476ed587320797f3b30f4232ba85ef"
	I1217 00:32:04.519536 1144175 cri.go:89] found id: "9733ba6e686c6280ecf2d5b282f35fd4bc036b0d4646d08c29a509cb2af26b70"
	I1217 00:32:04.519542 1144175 cri.go:89] found id: "6fbc1aa1c1165d73fac9c13e16d15a28f36331b7924682f54824821614bbb726"
	I1217 00:32:04.519546 1144175 cri.go:89] found id: "6be3d66db02da194546f2280f08824f82e9129c6cd34ebd8a87ce330db655a31"
	I1217 00:32:04.519551 1144175 cri.go:89] found id: "7e472f122d8fb77912d3f626e9d5a8cbf579397e77f67acb49f118ebef5dbc82"
	I1217 00:32:04.519556 1144175 cri.go:89] found id: "ee62b48d5f8a83530bda9bbacdfd829ba552e810c8bbc52b00b1816b8ab1af58"
	I1217 00:32:04.519563 1144175 cri.go:89] found id: "fa923421199e6feb3d9a2cb218b8a4ee0b3fc1d8ab5ee9a9dbad8775ee551ba4"
	I1217 00:32:04.519567 1144175 cri.go:89] found id: "d80c862e4d31049c7133c7815e9de21a458d622328200634fb02aa580948b0a3"
	I1217 00:32:04.519570 1144175 cri.go:89] found id: "6111e6b00517fa20186c757937a7b6c3e85554946261934129a286323d5596e2"
	I1217 00:32:04.519573 1144175 cri.go:89] found id: "a43c51ac35173d7857c269aae41644fa539eec340b321b4aedb48f6c45a880b4"
	I1217 00:32:04.519577 1144175 cri.go:89] found id: "641fd3059b8b517d0d64ff6b1cc3345a20133f4c3cbba9fc8161a74b329530a5"
	I1217 00:32:04.519580 1144175 cri.go:89] found id: "d981f5abaaa973bb6b0fb30328b14127b4f43b91e9de42aee00c1841d7dfdd60"
	I1217 00:32:04.519583 1144175 cri.go:89] found id: "d607d9f1296a5b5767da9e584c0d9cd424d18ce671f3c22eccf0f242c0c4d16e"
	I1217 00:32:04.519586 1144175 cri.go:89] found id: ""
	I1217 00:32:04.519639 1144175 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 00:32:04.537896 1144175 out.go:203] 
	W1217 00:32:04.542318 1144175 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 00:32:04.542357 1144175 out.go:285] * 
	* 
	W1217 00:32:04.551660 1144175 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:32:04.556089 1144175 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-219291 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)
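The Yakd disable aborts on the same probe as the NvidiaDevicePlugin test above: the addon pods are healthy, but `sudo runc list -f json` fails because /run/runc is absent, presumably because CRI-O on this image uses a different OCI runtime or keeps its state elsewhere. A quick check of which runtime state directories actually exist on the node (a sketch; the profile name is taken from this run, and neither path is guaranteed to be present):

    out/minikube-linux-arm64 -p addons-219291 ssh "ls -d /run/runc /run/crun"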

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-099267
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image load --daemon kicbase/echo-server:functional-099267 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 image ls: (2.299747313s)
functional_test.go:461: expected "kicbase/echo-server:functional-099267" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.25s)
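The load step appears to complete, but `image ls` never shows the tag. A sketch of the same sequence run by hand, with one extra check (not part of the test) against the node's CRI-O image store via crictl, assuming the functional-099267 profile from this run is still available:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-099267
    out/minikube-linux-arm64 -p functional-099267 image load --daemon kicbase/echo-server:functional-099267
    out/minikube-linux-arm64 -p functional-099267 image ls
    # Extra check: ask CRI-O on the node directly whether the image arrived
    out/minikube-linux-arm64 -p functional-099267 ssh "sudo crictl images | grep echo-server"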

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1217 00:41:45.358239 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:13.068616 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.911810 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.918189 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.929525 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.950901 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.992404 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:08.073951 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:08.235590 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:08.557345 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:09.199443 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:10.480859 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:13.042292 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:18.164108 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:28.406343 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:48.887826 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:29.849654 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:45.357683 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:47:51.771033 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.883182915s)

                                                
                                                
-- stdout --
	* [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:43139
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:43139 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000439084s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112257s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112257s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
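Both kubeadm attempts fail at the same point: the kubelet never answers on 127.0.0.1:10248, and the preflight output flags cgroups v1 deprecation and a disabled kubelet service. A sketch of the follow-up the output itself suggests, run against the still-running container (the retry flag is the hint from the failure message, not a verified fix):

    # Kubelet diagnostics inside the node, as suggested by the kubeadm output above:
    out/minikube-linux-arm64 -p functional-389537 ssh "sudo systemctl status kubelet"
    out/minikube-linux-arm64 -p functional-389537 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    # Retry the start with the cgroup-driver hint from the error message:
    out/minikube-linux-arm64 start -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd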
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 6 (339.911178ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 00:49:22.311766 1170481 status.go:458] kubeconfig endpoint: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
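The exit status 6 follows from the failed start: most likely the profile's entry was never written to the kubeconfig because kubeadm never completed, so `status` reports a stale endpoint while the container itself is Running. A sketch of the recovery the status output suggests (the node check will keep failing until the apiserver actually comes up, and a full `minikube start` may be needed if the kubeconfig entry was never created at all):

    out/minikube-linux-arm64 -p functional-389537 update-context
    kubectl config current-context
    kubectl --context functional-389537 get nodes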
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-099267 image save kicbase/echo-server:functional-099267 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image rm kicbase/echo-server:functional-099267 --alsologtostderr                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image save --daemon kicbase/echo-server:functional-099267 --alsologtostderr                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/test/nested/copy/1136597/hosts                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/1136597.pem                                                                                                 │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/1136597.pem                                                                                     │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/11365972.pem                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                                   │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/11365972.pem                                                                                    │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                                   │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                                   │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format short --alsologtostderr                                                                                               │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh pgrep buildkitd                                                                                                                     │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image          │ functional-099267 image ls --format yaml --alsologtostderr                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                                    │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format json --alsologtostderr                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format table --alsologtostderr                                                                                               │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete         │ -p functional-099267                                                                                                                                      │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start          │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
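For reference, the image-related rows in the table above can be replayed by hand against the same profile. A minimal sketch, assuming a minikube binary on PATH and with the Jenkins workspace path shortened to a local file:

    # list images known to the cluster's container runtime
    minikube -p functional-099267 image ls
    # load a saved image tarball into the cluster, then export it back to the host daemon
    minikube -p functional-099267 image load ./echo-server-save.tar --alsologtostderr
    minikube -p functional-099267 image save --daemon kicbase/echo-server:functional-099267 --alsologtostderr
    # build an image directly inside the cluster's runtime
    minikube -p functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr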
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:01
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:41:01.127682 1164882 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:41:01.127849 1164882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:01.127861 1164882 out.go:374] Setting ErrFile to fd 2...
	I1217 00:41:01.127866 1164882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:01.128276 1164882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:41:01.128894 1164882 out.go:368] Setting JSON to false
	I1217 00:41:01.130258 1164882 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23012,"bootTime":1765909050,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:41:01.130346 1164882 start.go:143] virtualization:  
	I1217 00:41:01.134960 1164882 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:41:01.139801 1164882 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:01.139872 1164882 notify.go:221] Checking for updates...
	I1217 00:41:01.146944 1164882 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:01.150370 1164882 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:41:01.153676 1164882 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:41:01.156858 1164882 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:41:01.160081 1164882 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:01.163631 1164882 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:01.195931 1164882 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:41:01.196057 1164882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:01.258907 1164882 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:41:01.240974072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:41:01.259002 1164882 docker.go:319] overlay module found
	I1217 00:41:01.262430 1164882 out.go:179] * Using the docker driver based on user configuration
	I1217 00:41:01.265494 1164882 start.go:309] selected driver: docker
	I1217 00:41:01.265504 1164882 start.go:927] validating driver "docker" against <nil>
	I1217 00:41:01.265516 1164882 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:01.266257 1164882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:01.321983 1164882 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:41:01.312461083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:41:01.322125 1164882 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:41:01.322368 1164882 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:01.326032 1164882 out.go:179] * Using Docker driver with root privileges
	I1217 00:41:01.328938 1164882 cni.go:84] Creating CNI manager for ""
	I1217 00:41:01.329003 1164882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:41:01.329011 1164882 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:41:01.329087 1164882 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:01.334085 1164882 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:41:01.336989 1164882 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:41:01.339910 1164882 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:01.342819 1164882 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:41:01.342854 1164882 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:41:01.342861 1164882 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:01.342933 1164882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:01.342944 1164882 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:41:01.342953 1164882 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:41:01.343317 1164882 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:41:01.343337 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json: {Name:mk2837b8bee755963283147fbcedd1a62e8fc618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:01.362243 1164882 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:01.362255 1164882 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:01.362275 1164882 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:01.362306 1164882 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:01.362418 1164882 start.go:364] duration metric: took 99.026µs to acquireMachinesLock for "functional-389537"
	I1217 00:41:01.362446 1164882 start.go:93] Provisioning new machine with config: &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:41:01.362512 1164882 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:41:01.367861 1164882 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1217 00:41:01.368152 1164882 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:43139 to docker env.
	I1217 00:41:01.368178 1164882 start.go:159] libmachine.API.Create for "functional-389537" (driver="docker")
	I1217 00:41:01.368205 1164882 client.go:173] LocalClient.Create starting
	I1217 00:41:01.368266 1164882 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 00:41:01.368297 1164882 main.go:143] libmachine: Decoding PEM data...
	I1217 00:41:01.368311 1164882 main.go:143] libmachine: Parsing certificate...
	I1217 00:41:01.368362 1164882 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 00:41:01.368377 1164882 main.go:143] libmachine: Decoding PEM data...
	I1217 00:41:01.368391 1164882 main.go:143] libmachine: Parsing certificate...
	I1217 00:41:01.368763 1164882 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:41:01.385602 1164882 cli_runner.go:211] docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:41:01.385689 1164882 network_create.go:284] running [docker network inspect functional-389537] to gather additional debugging logs...
	I1217 00:41:01.385706 1164882 cli_runner.go:164] Run: docker network inspect functional-389537
	W1217 00:41:01.401297 1164882 cli_runner.go:211] docker network inspect functional-389537 returned with exit code 1
	I1217 00:41:01.401318 1164882 network_create.go:287] error running [docker network inspect functional-389537]: docker network inspect functional-389537: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-389537 not found
	I1217 00:41:01.401334 1164882 network_create.go:289] output of [docker network inspect functional-389537]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-389537 not found
	
	** /stderr **
	I1217 00:41:01.401438 1164882 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:41:01.417770 1164882 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001857bb0}
	I1217 00:41:01.417799 1164882 network_create.go:124] attempt to create docker network functional-389537 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:41:01.417853 1164882 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-389537 functional-389537
	I1217 00:41:01.475403 1164882 network_create.go:108] docker network functional-389537 192.168.49.0/24 created
	I1217 00:41:01.475433 1164882 kic.go:121] calculated static IP "192.168.49.2" for the "functional-389537" container
	I1217 00:41:01.475519 1164882 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:41:01.490130 1164882 cli_runner.go:164] Run: docker volume create functional-389537 --label name.minikube.sigs.k8s.io=functional-389537 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:41:01.509548 1164882 oci.go:103] Successfully created a docker volume functional-389537
	I1217 00:41:01.509652 1164882 cli_runner.go:164] Run: docker run --rm --name functional-389537-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389537 --entrypoint /usr/bin/test -v functional-389537:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:41:02.081020 1164882 oci.go:107] Successfully prepared a docker volume functional-389537
	I1217 00:41:02.081079 1164882 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:41:02.081087 1164882 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:41:02.081167 1164882 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-389537:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:41:06.017574 1164882 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-389537:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.936364732s)
	I1217 00:41:06.017598 1164882 kic.go:203] duration metric: took 3.93650694s to extract preloaded images to volume ...
	W1217 00:41:06.017760 1164882 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 00:41:06.017866 1164882 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:41:06.080568 1164882 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-389537 --name functional-389537 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389537 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-389537 --network functional-389537 --ip 192.168.49.2 --volume functional-389537:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:41:06.418756 1164882 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Running}}
	I1217 00:41:06.440284 1164882 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:41:06.462195 1164882 cli_runner.go:164] Run: docker exec functional-389537 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:41:06.517465 1164882 oci.go:144] the created container "functional-389537" has a running status.
	I1217 00:41:06.517484 1164882 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa...
	I1217 00:41:06.587123 1164882 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:41:06.610215 1164882 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:41:06.631680 1164882 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:41:06.631753 1164882 kic_runner.go:114] Args: [docker exec --privileged functional-389537 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:41:06.679024 1164882 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:41:06.699979 1164882 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:06.700076 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:06.721568 1164882 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:06.722323 1164882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:41:06.722332 1164882 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:06.724317 1164882 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 00:41:09.860062 1164882 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:41:09.860075 1164882 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:41:09.860146 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:09.877690 1164882 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:09.878005 1164882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:41:09.878014 1164882 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:41:10.029137 1164882 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:41:10.029240 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:10.050075 1164882 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:10.050401 1164882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:41:10.050415 1164882 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:10.185001 1164882 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:41:10.185017 1164882 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:41:10.185045 1164882 ubuntu.go:190] setting up certificates
	I1217 00:41:10.185054 1164882 provision.go:84] configureAuth start
	I1217 00:41:10.185127 1164882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:41:10.203364 1164882 provision.go:143] copyHostCerts
	I1217 00:41:10.203467 1164882 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:41:10.203477 1164882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:41:10.203558 1164882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:41:10.203735 1164882 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:41:10.203740 1164882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:41:10.203771 1164882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:41:10.203827 1164882 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:41:10.203831 1164882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:41:10.203853 1164882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:41:10.203898 1164882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:41:10.637496 1164882 provision.go:177] copyRemoteCerts
	I1217 00:41:10.637556 1164882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:10.637601 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:10.654533 1164882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:41:10.748372 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:41:10.766422 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:10.784810 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:10.802292 1164882 provision.go:87] duration metric: took 617.218052ms to configureAuth
	I1217 00:41:10.802311 1164882 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:10.802492 1164882 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:10.802602 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:10.820343 1164882 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:10.820673 1164882 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:41:10.820686 1164882 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:41:11.124901 1164882 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:41:11.124913 1164882 machine.go:97] duration metric: took 4.424921931s to provisionDockerMachine
	I1217 00:41:11.124923 1164882 client.go:176] duration metric: took 9.756713403s to LocalClient.Create
	I1217 00:41:11.124951 1164882 start.go:167] duration metric: took 9.756759384s to libmachine.API.Create "functional-389537"
	I1217 00:41:11.124958 1164882 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:41:11.124968 1164882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:11.125031 1164882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:11.125069 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:11.143067 1164882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:41:11.240922 1164882 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:11.244309 1164882 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:11.244327 1164882 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:11.244337 1164882 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:41:11.244392 1164882 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:41:11.244509 1164882 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:41:11.244597 1164882 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:41:11.244641 1164882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:41:11.252218 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:41:11.270332 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:41:11.287628 1164882 start.go:296] duration metric: took 162.641012ms for postStartSetup
	I1217 00:41:11.287982 1164882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:41:11.305345 1164882 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:41:11.305630 1164882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:11.305673 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:11.323607 1164882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:41:11.417235 1164882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:11.421725 1164882 start.go:128] duration metric: took 10.059197069s to createHost
	I1217 00:41:11.421740 1164882 start.go:83] releasing machines lock for "functional-389537", held for 10.059314949s
	I1217 00:41:11.421818 1164882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:41:11.442947 1164882 out.go:179] * Found network options:
	I1217 00:41:11.445896 1164882 out.go:179]   - HTTP_PROXY=localhost:43139
	W1217 00:41:11.448797 1164882 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1217 00:41:11.451702 1164882 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1217 00:41:11.454557 1164882 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:11.454596 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:11.454643 1164882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:41:11.454699 1164882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:41:11.478406 1164882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:41:11.479782 1164882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:41:11.661370 1164882 ssh_runner.go:195] Run: systemctl --version
	I1217 00:41:11.667658 1164882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:41:11.704022 1164882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:11.708352 1164882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:11.708412 1164882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:11.735946 1164882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 00:41:11.735960 1164882 start.go:496] detecting cgroup driver to use...
	I1217 00:41:11.736002 1164882 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:11.736049 1164882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:11.754541 1164882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:11.767672 1164882 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:41:11.767746 1164882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:41:11.786407 1164882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:41:11.805844 1164882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:41:11.944610 1164882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:41:12.087702 1164882 docker.go:234] disabling docker service ...
	I1217 00:41:12.087762 1164882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:41:12.109797 1164882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:41:12.123803 1164882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:41:12.257300 1164882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:41:12.385364 1164882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:12.398072 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:12.411920 1164882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:41:12.411977 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.420776 1164882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:41:12.420861 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.430594 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.439139 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.448014 1164882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:12.456309 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.465255 1164882 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.479094 1164882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:41:12.488029 1164882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:12.495587 1164882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:12.503068 1164882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:12.619386 1164882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:41:12.792251 1164882 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:41:12.792311 1164882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:41:12.796505 1164882 start.go:564] Will wait 60s for crictl version
	I1217 00:41:12.796582 1164882 ssh_runner.go:195] Run: which crictl
	I1217 00:41:12.800507 1164882 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:12.825242 1164882 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:41:12.825328 1164882 ssh_runner.go:195] Run: crio --version
	I1217 00:41:12.853678 1164882 ssh_runner.go:195] Run: crio --version
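The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. The resulting file is not captured in this log; a sketch of how it could be checked on the node, with the values those sed commands imply:

    minikube -p functional-389537 ssh \
      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # expected, based on the sed commands above (approximate):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",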
	I1217 00:41:12.883630 1164882 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:41:12.886476 1164882 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:41:12.903434 1164882 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:41:12.907248 1164882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:41:12.917113 1164882 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:12.917213 1164882 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:41:12.917262 1164882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:41:12.948961 1164882 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:41:12.948973 1164882 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:41:12.949027 1164882 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:41:12.980296 1164882 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:41:12.980307 1164882 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:41:12.980314 1164882 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:41:12.980406 1164882 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
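The ExecStart override above is installed later in this log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). To inspect the effective unit with the injected flags on the node, something like this would do (illustrative only, not part of the test run):

    minikube -p functional-389537 ssh -- systemctl cat kubelet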
	I1217 00:41:12.980528 1164882 ssh_runner.go:195] Run: crio config
	I1217 00:41:13.047391 1164882 cni.go:84] Creating CNI manager for ""
	I1217 00:41:13.047404 1164882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:41:13.047439 1164882 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:13.047504 1164882 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:13.047654 1164882 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
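The kubeadm configuration above is transferred to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2221-byte scp below). As a sketch only, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases, it could be sanity-checked with the binary minikube has already staged on the node:

    minikube -p functional-389537 ssh \
      "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"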
	I1217 00:41:13.047738 1164882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:13.056106 1164882 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:13.056173 1164882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:13.064476 1164882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:41:13.078196 1164882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:13.092338 1164882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 00:41:13.106275 1164882 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:13.110111 1164882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:41:13.120466 1164882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:13.239138 1164882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:13.256326 1164882 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:41:13.256337 1164882 certs.go:195] generating shared ca certs ...
	I1217 00:41:13.256351 1164882 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.256524 1164882 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:41:13.256569 1164882 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:41:13.256584 1164882 certs.go:257] generating profile certs ...
	I1217 00:41:13.256640 1164882 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:41:13.256659 1164882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt with IP's: []
	I1217 00:41:13.387016 1164882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt ...
	I1217 00:41:13.387033 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: {Name:mk6d666f3ffc040e4d9e7cb1f0deaba4106c0311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.387239 1164882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key ...
	I1217 00:41:13.387245 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key: {Name:mkcaaa03b77658e554bc88a031678c7a76d9bebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.387336 1164882 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:41:13.387347 1164882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt.05abf8de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:41:13.546992 1164882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt.05abf8de ...
	I1217 00:41:13.547006 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt.05abf8de: {Name:mk7b903f3348116e696d361152b1a0f8c6cf812d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.547203 1164882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de ...
	I1217 00:41:13.547211 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de: {Name:mk0fa0665022cd0877ea759ab6a0b4bd4c4a140a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.547303 1164882 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt.05abf8de -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt
	I1217 00:41:13.547384 1164882 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key
	I1217 00:41:13.547437 1164882 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:41:13.547457 1164882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt with IP's: []
	I1217 00:41:13.702474 1164882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt ...
	I1217 00:41:13.702490 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt: {Name:mk99c380ad7e8acee7335786ef914b75e17b7a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.702687 1164882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key ...
	I1217 00:41:13.702696 1164882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key: {Name:mk0e3de0a86436489270008a3777fd924242cb95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:13.702887 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:41:13.702929 1164882 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:13.702935 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:41:13.702961 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:41:13.702983 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:41:13.703005 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:41:13.703047 1164882 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:41:13.703669 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:13.722573 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:13.740765 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:13.758781 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:13.777802 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:13.796484 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:13.815564 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:13.833957 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:41:13.851705 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:41:13.870008 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:13.888173 1164882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:41:13.905881 1164882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:13.919235 1164882 ssh_runner.go:195] Run: openssl version
	I1217 00:41:13.925738 1164882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:41:13.933857 1164882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:41:13.942048 1164882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:41:13.946005 1164882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:41:13.946063 1164882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:41:13.987428 1164882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:41:13.994848 1164882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1136597.pem /etc/ssl/certs/51391683.0
	I1217 00:41:14.005139 1164882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:41:14.021720 1164882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:41:14.031356 1164882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:41:14.038679 1164882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:41:14.038738 1164882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:41:14.082802 1164882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:14.090602 1164882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11365972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:14.098627 1164882 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:14.106383 1164882 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:14.114599 1164882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:14.118564 1164882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:14.118624 1164882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:14.164726 1164882 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:14.172582 1164882 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
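The openssl/ln pairs above follow the usual OpenSSL subject-hash convention: each CA PEM copied to /usr/share/ca-certificates is linked into /etc/ssl/certs under its hash name (b5213941 is the hash computed for minikubeCA.pem in this run). A minimal sketch of that convention, using the minikubeCA.pem path from this log:

	# Sketch of the hash-symlink step seen above (illustrative, not re-run by the test)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> /etc/ssl/certs/b5213941.0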
	I1217 00:41:14.180571 1164882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:14.184321 1164882 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:41:14.184363 1164882 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:14.184526 1164882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:41:14.184596 1164882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:41:14.212484 1164882 cri.go:89] found id: ""
	I1217 00:41:14.212544 1164882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:14.220377 1164882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:14.228550 1164882 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:41:14.228608 1164882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:14.236688 1164882 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:41:14.236707 1164882 kubeadm.go:158] found existing configuration files:
	
	I1217 00:41:14.236760 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:14.245486 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:41:14.245560 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:41:14.253521 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:14.262025 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:41:14.262079 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:14.269938 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:14.277764 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:41:14.277833 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:14.285330 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:14.293468 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:41:14.293526 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:14.301047 1164882 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:41:14.421259 1164882 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:41:14.421735 1164882 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:41:14.491396 1164882 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:45:18.712927 1164882 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:45:18.712952 1164882 kubeadm.go:319] 
	I1217 00:45:18.713077 1164882 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:45:18.718574 1164882 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:45:18.718627 1164882 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:45:18.718714 1164882 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:45:18.718769 1164882 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:45:18.718803 1164882 kubeadm.go:319] OS: Linux
	I1217 00:45:18.718847 1164882 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:45:18.718894 1164882 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:45:18.718940 1164882 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:45:18.718986 1164882 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:45:18.719033 1164882 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:45:18.719080 1164882 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:45:18.719124 1164882 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:45:18.719171 1164882 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:45:18.719215 1164882 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:45:18.719286 1164882 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:45:18.719384 1164882 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:45:18.719472 1164882 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:45:18.719533 1164882 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:45:18.722506 1164882 out.go:252]   - Generating certificates and keys ...
	I1217 00:45:18.722598 1164882 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:45:18.722701 1164882 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:45:18.722793 1164882 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:45:18.722849 1164882 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:45:18.722909 1164882 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:45:18.722959 1164882 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:45:18.723029 1164882 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:45:18.723183 1164882 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:45:18.723236 1164882 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:45:18.723373 1164882 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:45:18.723462 1164882 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:45:18.723534 1164882 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:45:18.723577 1164882 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:45:18.723632 1164882 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:45:18.723689 1164882 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:45:18.723745 1164882 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:45:18.723817 1164882 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:45:18.723905 1164882 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:45:18.723969 1164882 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:45:18.724053 1164882 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:45:18.724127 1164882 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:45:18.728892 1164882 out.go:252]   - Booting up control plane ...
	I1217 00:45:18.728995 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:45:18.729079 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:45:18.729170 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:45:18.729285 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:45:18.729386 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:45:18.729497 1164882 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:45:18.729590 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:45:18.729636 1164882 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:45:18.729787 1164882 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:45:18.729905 1164882 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:45:18.729976 1164882 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000439084s
	I1217 00:45:18.729980 1164882 kubeadm.go:319] 
	I1217 00:45:18.730035 1164882 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:45:18.730065 1164882 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:45:18.730177 1164882 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:45:18.730180 1164882 kubeadm.go:319] 
	I1217 00:45:18.730291 1164882 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:45:18.730323 1164882 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:45:18.730353 1164882 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:45:18.730370 1164882 kubeadm.go:319] 
	W1217 00:45:18.730493 1164882 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-389537 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000439084s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
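This first init attempt fails because kubeadm waits up to four minutes for the kubelet's local healthz endpoint and never gets a healthy answer. The commands kubeadm itself suggests, plus the exact probe it was polling, can be run inside the node to narrow the failure down (a manual follow-up, not something this test harness executed):

	# Kubelet diagnostics suggested by the kubeadm output above (run inside the node)
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 100
	curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm polls during wait-control-plane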
	
	I1217 00:45:18.730587 1164882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:45:19.157023 1164882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:19.170413 1164882 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:19.170469 1164882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:19.178508 1164882 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:19.178518 1164882 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:19.178576 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:19.186545 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:19.186601 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:19.194214 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:19.202207 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:19.202262 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:19.209983 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:19.217930 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:19.217995 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:19.225566 1164882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:19.233585 1164882 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:19.233643 1164882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:45:19.241419 1164882 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:19.279988 1164882 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:45:19.280290 1164882 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:45:19.357778 1164882 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:45:19.357844 1164882 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 00:45:19.357881 1164882 kubeadm.go:319] OS: Linux
	I1217 00:45:19.357925 1164882 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:45:19.357973 1164882 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:45:19.358020 1164882 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:45:19.358067 1164882 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:45:19.358114 1164882 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:45:19.358161 1164882 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:45:19.358205 1164882 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:45:19.358252 1164882 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:45:19.358299 1164882 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:45:19.428541 1164882 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:45:19.428678 1164882 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:45:19.428805 1164882 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:45:19.436597 1164882 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:45:19.440074 1164882 out.go:252]   - Generating certificates and keys ...
	I1217 00:45:19.440161 1164882 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:45:19.440225 1164882 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:45:19.440301 1164882 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:45:19.440361 1164882 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:45:19.440447 1164882 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:45:19.440698 1164882 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:45:19.440763 1164882 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:45:19.440896 1164882 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:45:19.441326 1164882 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:45:19.441639 1164882 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:45:19.441908 1164882 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:45:19.441965 1164882 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:45:20.082949 1164882 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:45:20.413805 1164882 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:45:20.641927 1164882 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:45:21.195522 1164882 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:45:21.349157 1164882 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:45:21.349840 1164882 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:45:21.353115 1164882 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:45:21.356374 1164882 out.go:252]   - Booting up control plane ...
	I1217 00:45:21.356476 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:45:21.356557 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:45:21.357191 1164882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:45:21.372409 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:45:21.372562 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:45:21.380353 1164882 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:45:21.380701 1164882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:45:21.380913 1164882 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:45:21.508861 1164882 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:45:21.508974 1164882 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:21.509973 1164882 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001112257s
	I1217 00:49:21.509997 1164882 kubeadm.go:319] 
	I1217 00:49:21.510094 1164882 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:21.510150 1164882 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:21.510480 1164882 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:21.510489 1164882 kubeadm.go:319] 
	I1217 00:49:21.510670 1164882 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:21.510960 1164882 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:21.511012 1164882 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:21.511016 1164882 kubeadm.go:319] 
	I1217 00:49:21.516140 1164882 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:49:21.516635 1164882 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:21.516757 1164882 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:21.517040 1164882 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:49:21.517045 1164882 kubeadm.go:319] 
	I1217 00:49:21.517118 1164882 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:49:21.517176 1164882 kubeadm.go:403] duration metric: took 8m7.332816299s to StartCluster
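The retry times out the same way, so the whole StartCluster phase burns the full 8m7s on two failed kubelet health waits. The preflight warnings above note that the node is still on cgroups v1, which kubelet v1.35 only tolerates when the 'FailCgroupV1' configuration option is explicitly set to false; a quick, illustrative way to confirm which cgroup hierarchy the node is actually running (not part of this run):

	# Check the cgroup hierarchy on the node
	stat -fc %T /sys/fs/cgroup    # "cgroup2fs" means cgroups v2; "tmpfs" means a cgroups v1 hybrid mount
	cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || echo "no unified controllers file (cgroups v1)"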
	I1217 00:49:21.517204 1164882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:49:21.517272 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:49:21.550604 1164882 cri.go:89] found id: ""
	I1217 00:49:21.550619 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.550626 1164882 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:49:21.550632 1164882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:49:21.550689 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:49:21.581508 1164882 cri.go:89] found id: ""
	I1217 00:49:21.581521 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.581528 1164882 logs.go:284] No container was found matching "etcd"
	I1217 00:49:21.581533 1164882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:49:21.581593 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:49:21.606857 1164882 cri.go:89] found id: ""
	I1217 00:49:21.606871 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.606877 1164882 logs.go:284] No container was found matching "coredns"
	I1217 00:49:21.606883 1164882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:49:21.606940 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:49:21.635935 1164882 cri.go:89] found id: ""
	I1217 00:49:21.635949 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.635956 1164882 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:49:21.635961 1164882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:49:21.636022 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:49:21.661632 1164882 cri.go:89] found id: ""
	I1217 00:49:21.661646 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.661654 1164882 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:49:21.661659 1164882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:49:21.661720 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:49:21.686663 1164882 cri.go:89] found id: ""
	I1217 00:49:21.686679 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.686686 1164882 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:49:21.686691 1164882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:49:21.686752 1164882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:49:21.716047 1164882 cri.go:89] found id: ""
	I1217 00:49:21.716070 1164882 logs.go:282] 0 containers: []
	W1217 00:49:21.716077 1164882 logs.go:284] No container was found matching "kindnet"
	I1217 00:49:21.716086 1164882 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:49:21.716096 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:49:21.782012 1164882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:49:21.772682    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.773375    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.775223    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.775808    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.777630    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:49:21.772682    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.773375    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.775223    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.775808    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:21.777630    4888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:49:21.782027 1164882 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:49:21.782038 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:49:21.814129 1164882 logs.go:123] Gathering logs for container status ...
	I1217 00:49:21.814148 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:49:21.845476 1164882 logs.go:123] Gathering logs for kubelet ...
	I1217 00:49:21.845492 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:49:21.910262 1164882 logs.go:123] Gathering logs for dmesg ...
	I1217 00:49:21.910283 1164882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1217 00:49:21.930755 1164882 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112257s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:49:21.930803 1164882 out.go:285] * 
	W1217 00:49:21.930878 1164882 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112257s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:49:21.930949 1164882 out.go:285] * 
	W1217 00:49:21.933593 1164882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:49:21.938834 1164882 out.go:203] 
	W1217 00:49:21.942094 1164882 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001112257s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:49:21.942213 1164882 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:49:21.942274 1164882 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:49:21.947082 1164882 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.78549788Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785533359Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785582687Z" level=info msg="Create NRI interface"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785684453Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785692609Z" level=info msg="runtime interface created"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785703111Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785709446Z" level=info msg="runtime interface starting up..."
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785715977Z" level=info msg="starting plugins..."
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785728177Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:41:12 functional-389537 crio[842]: time="2025-12-17T00:41:12.785785112Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:41:12 functional-389537 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.494782806Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=19774056-22bb-4695-baca-0f85375a52e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.497356047Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e43ef18b-7d39-44a5-926b-876bc022be26 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.498028313Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=6cb411b6-d0d8-47db-9e5a-579194b90795 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.498508517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d9504f56-d9c3-496d-9cd6-9ba453a32f26 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.498957164Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=3b8c8163-cfe3-4b02-bf49-12e353d48532 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.499444752Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=65df2552-1e17-4273-94b3-b6fa577e918d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:41:14 functional-389537 crio[842]: time="2025-12-17T00:41:14.499953386Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=e70b5152-286c-47cb-ba15-dfa18e34fa8c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.431716686Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c845f99d-434c-4f09-a8aa-1d358ec2f616 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.432613816Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce8496c5-e5d2-4244-b7c5-d376873fec87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.433137653Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=22232bff-63bc-461c-9d27-984d42bbcf5b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.433796478Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9d6dcc70-4a63-4d68-a11c-836613bdb775 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.434370538Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4972cfcd-3873-4563-a28f-2433ac3971fb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.434948438Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9aca7d33-ee6f-49db-a8bd-b8cee152abf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:45:19 functional-389537 crio[842]: time="2025-12-17T00:45:19.435513374Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=79a7d4bc-a1d0-4181-8e35-d18a9b377c3a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:49:22.919153    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:22.919784    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:22.921688    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:22.922276    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:49:22.924029    5010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:37] overlayfs: idmapped layers are currently not supported
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:49:22 up  6:31,  0 user,  load average: 0.07, 0.42, 0.97
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:49:20 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:49:20 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 638.
	Dec 17 00:49:20 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:20 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:20 functional-389537 kubelet[4820]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:20 functional-389537 kubelet[4820]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:20 functional-389537 kubelet[4820]: E1217 00:49:20.800907    4820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:49:20 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:49:20 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:49:21 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 17 00:49:21 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:21 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:21 functional-389537 kubelet[4826]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:21 functional-389537 kubelet[4826]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:21 functional-389537 kubelet[4826]: E1217 00:49:21.565892    4826 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:49:21 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:49:21 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:49:22 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 17 00:49:22 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:22 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:49:22 functional-389537 kubelet[4925]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:22 functional-389537 kubelet[4925]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:49:22 functional-389537 kubelet[4925]: E1217 00:49:22.285989    4925 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:49:22 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:49:22 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
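Note on the log dump above: the "==> kubelet <==" section shows the kubelet in a crash loop (restart counter 638-640) with the error "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the KubeletConfiguration option 'FailCgroupV1' that has to be set to false before kubelet v1.35 or newer will start on a cgroup v1 host. A minimal sketch of the diagnostics the output itself recommends is shown below; the profile name functional-389537 comes from the log, and the exact commands are a suggestion rather than part of the recorded run.

	# Inspect the kubelet crash loop on the node (commands quoted in the kubeadm output above).
	minikube ssh -p functional-389537 "sudo systemctl status kubelet"
	minikube ssh -p functional-389537 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"

	# Collect the full minikube log bundle, as the failure banner suggests.
	minikube logs -p functional-389537 --file=logs.txt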
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 6 (338.1788ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 00:49:23.398087 1170696 status.go:458] kubeconfig endpoint: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig

                                                
                                                
** /stderr **
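The status check above also reports a stale kubectl context: "functional-389537" does not appear in the kubeconfig named in the stderr. If this were being debugged interactively, the fix suggested by the warning in the stdout block would look roughly like the sketch below; these commands are illustrative and were not run as part of this report.

	# Rewrite the kubeconfig entry for the profile, as the warning above suggests.
	minikube update-context -p functional-389537

	# Point kubectl at the refreshed context before retrying any kubectl commands.
	kubectl config use-context functional-389537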
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.33s)
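Both this run and the SoftStart retry below fail the same way: kubeadm waits up to 4m0s for a healthy kubelet while the kubelet exits immediately on its cgroup v1 validation, so the API server on port 8441 never comes up. A hedged sketch of two start variants follows; the --extra-config value is the suggestion printed by minikube above, while the cgroup v2 boot parameter is a general workaround for the 'FailCgroupV1' warning and is not verified by this report.

	# Suggestion printed by minikube above (changes the kubelet cgroup driver, not the cgroup v1 check):
	minikube start -p functional-389537 --extra-config=kubelet.cgroup-driver=systemd

	# Alternative: boot the host with cgroup v2 (e.g. systemd.unified_cgroup_hierarchy=1 on the kernel
	# command line) so the kubelet v1.35 validation passes, then recreate the profile:
	minikube delete -p functional-389537 && minikube start -p functional-389537 --container-runtime=crio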

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:49:23.413653 1136597 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --alsologtostderr -v=8
E1217 00:50:07.912381 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:50:35.613135 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:51:45.354058 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:08.430417 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:55:07.912245 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389537 --alsologtostderr -v=8: exit status 80 (6m5.866125722s)

                                                
                                                
-- stdout --
	* [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:49:23.461389 1170766 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:49:23.461547 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461559 1170766 out.go:374] Setting ErrFile to fd 2...
	I1217 00:49:23.461579 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461900 1170766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:49:23.462303 1170766 out.go:368] Setting JSON to false
	I1217 00:49:23.463185 1170766 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23514,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:49:23.463289 1170766 start.go:143] virtualization:  
	I1217 00:49:23.466912 1170766 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:49:23.469855 1170766 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:49:23.469995 1170766 notify.go:221] Checking for updates...
	I1217 00:49:23.475916 1170766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:49:23.478779 1170766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:23.481739 1170766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:49:23.484668 1170766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:49:23.487521 1170766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:49:23.490907 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:23.491070 1170766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:49:23.524450 1170766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:49:23.524610 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.580909 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.571176137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.581015 1170766 docker.go:319] overlay module found
	I1217 00:49:23.585845 1170766 out.go:179] * Using the docker driver based on existing profile
	I1217 00:49:23.588706 1170766 start.go:309] selected driver: docker
	I1217 00:49:23.588726 1170766 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.588842 1170766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:49:23.588945 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.644593 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.634960306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.645010 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:23.645070 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:23.645127 1170766 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.648351 1170766 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:49:23.651037 1170766 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:49:23.653878 1170766 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:49:23.656858 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:23.656904 1170766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:49:23.656917 1170766 cache.go:65] Caching tarball of preloaded images
	I1217 00:49:23.656980 1170766 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:49:23.657013 1170766 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:49:23.657024 1170766 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:49:23.657126 1170766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:49:23.675917 1170766 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:49:23.675939 1170766 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:49:23.675960 1170766 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:49:23.675991 1170766 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:49:23.676062 1170766 start.go:364] duration metric: took 47.228µs to acquireMachinesLock for "functional-389537"
	I1217 00:49:23.676087 1170766 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:49:23.676097 1170766 fix.go:54] fixHost starting: 
	I1217 00:49:23.676360 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:23.693660 1170766 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:49:23.693691 1170766 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:49:23.696944 1170766 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:49:23.696988 1170766 machine.go:94] provisionDockerMachine start ...
	I1217 00:49:23.697095 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.714561 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.714904 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.714921 1170766 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:49:23.856040 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:23.856064 1170766 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:49:23.856128 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.875306 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.875626 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.875637 1170766 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:49:24.024137 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:24.024222 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.043436 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.043770 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.043794 1170766 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:49:24.176920 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:49:24.176960 1170766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:49:24.176987 1170766 ubuntu.go:190] setting up certificates
	I1217 00:49:24.177005 1170766 provision.go:84] configureAuth start
	I1217 00:49:24.177076 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:24.194508 1170766 provision.go:143] copyHostCerts
	I1217 00:49:24.194553 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194603 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:49:24.194616 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194693 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:49:24.194827 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194850 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:49:24.194859 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194890 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:49:24.194946 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.194967 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:49:24.194975 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.195000 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:49:24.195062 1170766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:49:24.401567 1170766 provision.go:177] copyRemoteCerts
	I1217 00:49:24.401643 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:49:24.401688 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.419163 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:24.516584 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:49:24.516654 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:49:24.535526 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:49:24.535590 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:49:24.556116 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:49:24.556181 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:49:24.575533 1170766 provision.go:87] duration metric: took 398.504828ms to configureAuth
	I1217 00:49:24.575561 1170766 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:49:24.575753 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:24.575856 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.593152 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.593467 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.593486 1170766 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:49:24.914611 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:49:24.914655 1170766 machine.go:97] duration metric: took 1.217656857s to provisionDockerMachine
	I1217 00:49:24.914668 1170766 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:49:24.914681 1170766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:49:24.914755 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:49:24.914823 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.935845 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.036750 1170766 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:49:25.040402 1170766 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:49:25.040450 1170766 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:49:25.040457 1170766 command_runner.go:130] > VERSION_ID="12"
	I1217 00:49:25.040461 1170766 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:49:25.040466 1170766 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:49:25.040470 1170766 command_runner.go:130] > ID=debian
	I1217 00:49:25.040475 1170766 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:49:25.040479 1170766 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:49:25.040485 1170766 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:49:25.040531 1170766 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:49:25.040571 1170766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:49:25.040583 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:49:25.040642 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:49:25.040724 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:49:25.040736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 00:49:25.040812 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:49:25.040822 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> /etc/test/nested/copy/1136597/hosts
	I1217 00:49:25.040875 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:49:25.048565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:25.066116 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:49:25.083960 1170766 start.go:296] duration metric: took 169.276161ms for postStartSetup
	I1217 00:49:25.084042 1170766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:49:25.084089 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.101382 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.193085 1170766 command_runner.go:130] > 18%
	I1217 00:49:25.193644 1170766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:49:25.197890 1170766 command_runner.go:130] > 160G
	I1217 00:49:25.198395 1170766 fix.go:56] duration metric: took 1.522293417s for fixHost
	I1217 00:49:25.198422 1170766 start.go:83] releasing machines lock for "functional-389537", held for 1.522344181s
	I1217 00:49:25.198491 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:25.216362 1170766 ssh_runner.go:195] Run: cat /version.json
	I1217 00:49:25.216396 1170766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:49:25.216449 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.216473 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.237434 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.266075 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.438053 1170766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:49:25.438122 1170766 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:49:25.438253 1170766 ssh_runner.go:195] Run: systemctl --version
	I1217 00:49:25.444320 1170766 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:49:25.444367 1170766 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:49:25.444850 1170766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:49:25.480454 1170766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:49:25.484847 1170766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:49:25.484904 1170766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:49:25.484962 1170766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:49:25.493012 1170766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:49:25.493039 1170766 start.go:496] detecting cgroup driver to use...
	I1217 00:49:25.493090 1170766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:49:25.493156 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:49:25.508569 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:49:25.521635 1170766 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:49:25.521740 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:49:25.537766 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:49:25.551122 1170766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:49:25.669862 1170766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:49:25.789898 1170766 docker.go:234] disabling docker service ...
	I1217 00:49:25.789984 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:49:25.805401 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:49:25.818559 1170766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:49:25.946131 1170766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:49:26.093460 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:49:26.106879 1170766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:49:26.120278 1170766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
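The crictl.yaml written above is what tells crictl (and the CRI calls minikube shells out to) which runtime socket to talk to. A quick manual sanity check of that wiring, assuming the same paths as in this run:

    sudo cat /etc/crictl.yaml        # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info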
	I1217 00:49:26.121659 1170766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:49:26.121720 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.130856 1170766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:49:26.130968 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.140092 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.149223 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.158222 1170766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:49:26.166662 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.176047 1170766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.184976 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
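The sed runs above rewrite the pause image, cgroup manager, conmon cgroup and default sysctls in the CRI-O drop-in. Roughly what those keys end up as after the edits, as a sketch only (the exact section layout of the drop-in is not asserted by the log; the values match the crio config dump further down):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",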
	I1217 00:49:26.194179 1170766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:49:26.201960 1170766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:49:26.202030 1170766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
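Both sysctls checked or set above are standard kubeadm preflight requirements for bridged pod networking. A hedged sketch of making them persistent across reboots (the file name is arbitrary; the values mirror this run):

    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system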
	I1217 00:49:26.209746 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.327753 1170766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:49:26.499257 1170766 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:49:26.499380 1170766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:49:26.502956 1170766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 00:49:26.502992 1170766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:49:26.503000 1170766 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1217 00:49:26.503008 1170766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:26.503016 1170766 command_runner.go:130] > Access: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503022 1170766 command_runner.go:130] > Modify: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503035 1170766 command_runner.go:130] > Change: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503041 1170766 command_runner.go:130] >  Birth: -
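The 60s wait for the socket path above boils down to polling until the restarted CRI-O recreates its socket, which the stat output then confirms. An equivalent one-liner, purely illustrative rather than minikube's actual code:

    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'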
	I1217 00:49:26.503359 1170766 start.go:564] Will wait 60s for crictl version
	I1217 00:49:26.503439 1170766 ssh_runner.go:195] Run: which crictl
	I1217 00:49:26.507311 1170766 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:49:26.507416 1170766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:49:26.531135 1170766 command_runner.go:130] > Version:  0.1.0
	I1217 00:49:26.531410 1170766 command_runner.go:130] > RuntimeName:  cri-o
	I1217 00:49:26.531606 1170766 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 00:49:26.531797 1170766 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:49:26.534036 1170766 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:49:26.534147 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.559497 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.559533 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.559539 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.559545 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.559550 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.559554 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.559558 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.559563 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.559567 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.559570 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.559574 1170766 command_runner.go:130] >      static
	I1217 00:49:26.559578 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.559582 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.559598 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.559608 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.559612 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.559615 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.559620 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.559632 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.559637 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.561572 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.587741 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.587775 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.587782 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.587787 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.587793 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.587846 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.587858 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.587864 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.587877 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.587887 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.587891 1170766 command_runner.go:130] >      static
	I1217 00:49:26.587894 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.587897 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.587919 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.587929 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.587935 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.587950 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.587961 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.587966 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.587971 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.594651 1170766 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:49:26.597589 1170766 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:49:26.614215 1170766 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:49:26.618047 1170766 command_runner.go:130] > 192.168.49.1	host.minikube.internal
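The grep above confirms the node's /etc/hosts already maps host.minikube.internal to the Docker network gateway, so processes on the node can reach the host. The equivalent manual step if the entry were missing (IP taken from this run):

    grep host.minikube.internal /etc/hosts || \
      echo '192.168.49.1	host.minikube.internal' | sudo tee -a /etc/hosts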
	I1217 00:49:26.618237 1170766 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:49:26.618355 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:26.618425 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.651766 1170766 command_runner.go:130] > {
	I1217 00:49:26.651794 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.651799 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651810 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.651814 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651830 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.651837 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651841 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651850 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.651859 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.651866 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651870 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.651874 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651881 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651884 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651887 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651894 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.651901 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651911 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.651914 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651918 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651926 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.651935 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.651948 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651953 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.651957 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651963 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651970 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651973 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651980 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.651986 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651991 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.651994 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651998 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652006 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.652014 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.652026 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652030 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.652034 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.652038 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652041 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652044 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652051 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.652057 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652062 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.652065 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652069 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652077 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.652087 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.652091 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652095 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.652106 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652118 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652122 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652131 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652135 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652156 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652165 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652183 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.652204 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652210 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.652215 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652219 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652227 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.652238 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.652242 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652246 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.652252 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652256 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652260 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652266 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652271 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652274 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652277 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652284 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.652289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652296 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.652302 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652305 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652313 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.652322 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.652329 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652333 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.652337 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652344 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652350 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652354 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652358 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652361 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652364 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652371 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.652379 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652407 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.652458 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652463 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652470 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.652478 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.652526 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652536 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.652557 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652564 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652567 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652570 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652577 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.652589 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652595 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.652598 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652605 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652615 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.652653 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.652661 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652666 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.652670 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652674 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652677 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652681 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652689 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652696 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652702 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652708 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.652712 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652717 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.652722 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652726 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652734 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.652741 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.652747 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652751 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.652755 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652761 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.652765 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652775 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652779 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.652782 1170766 command_runner.go:130] >     }
	I1217 00:49:26.652785 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.652790 1170766 command_runner.go:130] > }
	I1217 00:49:26.655303 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.655332 1170766 crio.go:433] Images already preloaded, skipping extraction
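The preload decision above comes from parsing the "crictl images --output json" output and comparing the tags against the expected v1.35.0-beta.0 image set. Reproducing the listing by hand looks roughly like this (jq is an assumption here, not something the test installs):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort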
	I1217 00:49:26.655388 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.678896 1170766 command_runner.go:130] > {
	I1217 00:49:26.678916 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.678921 1170766 command_runner.go:130] >     {
	I1217 00:49:26.678929 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.678933 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.678939 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.678942 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678946 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.678958 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.678968 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.678972 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678976 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.678980 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.678990 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679002 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679020 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679027 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.679030 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679036 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.679039 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679043 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679056 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.679065 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.679071 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679075 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.679079 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679091 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679098 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679101 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679107 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.679111 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679119 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.679122 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679127 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679135 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.679146 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.679149 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679153 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.679160 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.679164 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679169 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679172 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679179 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.679185 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679190 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.679194 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679199 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679215 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.679225 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.679228 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679233 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.679239 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679243 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679249 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679257 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679264 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679268 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679271 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679277 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.679289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679294 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.679297 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679301 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679309 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.679317 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.679328 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679333 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.679336 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679340 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679344 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679351 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679355 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679365 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679368 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679375 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.679378 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679387 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.679390 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679394 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679405 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.679419 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.679423 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679427 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.679438 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679442 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679445 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679449 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679455 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679459 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679462 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679471 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.679476 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679481 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.679486 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679491 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679501 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.679517 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.679521 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679525 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.679529 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679535 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679543 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679549 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679555 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.679560 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679568 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.679574 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679577 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679586 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.679605 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.679612 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679616 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.679619 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679626 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679629 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679633 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679637 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679640 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679643 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679649 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.679655 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679660 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.679672 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679676 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679683 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.679691 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.679698 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679703 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.679706 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679710 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.679713 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679717 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679721 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.679727 1170766 command_runner.go:130] >     }
	I1217 00:49:26.679730 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.679735 1170766 command_runner.go:130] > }
	I1217 00:49:26.682128 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.682152 1170766 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:49:26.682160 1170766 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:49:26.682270 1170766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
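The ExecStart line above is rendered into a systemd drop-in on the node; a hedged way to confirm which flags the running kubelet actually picked up (standard systemd and minikube commands, the drop-in path itself is not asserted here):

    minikube -p functional-389537 ssh -- sudo systemctl cat kubelet
    minikube -p functional-389537 ssh -- pgrep -a kubelet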
	I1217 00:49:26.682351 1170766 ssh_runner.go:195] Run: crio config
	I1217 00:49:26.731730 1170766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 00:49:26.731754 1170766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 00:49:26.731761 1170766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 00:49:26.731764 1170766 command_runner.go:130] > #
	I1217 00:49:26.731771 1170766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 00:49:26.731778 1170766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 00:49:26.731784 1170766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 00:49:26.731801 1170766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 00:49:26.731808 1170766 command_runner.go:130] > # reload'.
	I1217 00:49:26.731815 1170766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 00:49:26.731836 1170766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 00:49:26.731843 1170766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 00:49:26.731849 1170766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 00:49:26.731853 1170766 command_runner.go:130] > [crio]
	I1217 00:49:26.731859 1170766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 00:49:26.731866 1170766 command_runner.go:130] > # containers images, in this directory.
	I1217 00:49:26.732568 1170766 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 00:49:26.732592 1170766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 00:49:26.733157 1170766 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 00:49:26.733176 1170766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 00:49:26.733597 1170766 command_runner.go:130] > # imagestore = ""
	I1217 00:49:26.733614 1170766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 00:49:26.733623 1170766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 00:49:26.734179 1170766 command_runner.go:130] > # storage_driver = "overlay"
	I1217 00:49:26.734196 1170766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 00:49:26.734204 1170766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 00:49:26.734478 1170766 command_runner.go:130] > # storage_option = [
	I1217 00:49:26.734782 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.734798 1170766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 00:49:26.734807 1170766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 00:49:26.735378 1170766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 00:49:26.735394 1170766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 00:49:26.735411 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 00:49:26.735422 1170766 command_runner.go:130] > # always happen on a node reboot
	I1217 00:49:26.735984 1170766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 00:49:26.736023 1170766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 00:49:26.736036 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 00:49:26.736041 1170766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 00:49:26.736536 1170766 command_runner.go:130] > # version_file_persist = ""
	I1217 00:49:26.736561 1170766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 00:49:26.736570 1170766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 00:49:26.737150 1170766 command_runner.go:130] > # internal_wipe = true
	I1217 00:49:26.737173 1170766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 00:49:26.737180 1170766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 00:49:26.737739 1170766 command_runner.go:130] > # internal_repair = true
	I1217 00:49:26.737758 1170766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 00:49:26.737766 1170766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 00:49:26.737772 1170766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 00:49:26.738332 1170766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 00:49:26.738352 1170766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 00:49:26.738356 1170766 command_runner.go:130] > [crio.api]
	I1217 00:49:26.738361 1170766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 00:49:26.738921 1170766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 00:49:26.738940 1170766 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 00:49:26.739496 1170766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 00:49:26.739517 1170766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 00:49:26.739523 1170766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 00:49:26.740074 1170766 command_runner.go:130] > # stream_port = "0"
	I1217 00:49:26.740093 1170766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 00:49:26.740679 1170766 command_runner.go:130] > # stream_enable_tls = false
	I1217 00:49:26.740700 1170766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 00:49:26.741116 1170766 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 00:49:26.741133 1170766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 00:49:26.741147 1170766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 00:49:26.741613 1170766 command_runner.go:130] > # stream_tls_cert = ""
	I1217 00:49:26.741629 1170766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 00:49:26.741636 1170766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 00:49:26.742076 1170766 command_runner.go:130] > # stream_tls_key = ""
	I1217 00:49:26.742092 1170766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 00:49:26.742107 1170766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 00:49:26.742117 1170766 command_runner.go:130] > # automatically pick up the changes.
	I1217 00:49:26.742632 1170766 command_runner.go:130] > # stream_tls_ca = ""
	I1217 00:49:26.742675 1170766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743308 1170766 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 00:49:26.743331 1170766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743950 1170766 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 00:49:26.743971 1170766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 00:49:26.743978 1170766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 00:49:26.743981 1170766 command_runner.go:130] > [crio.runtime]
	I1217 00:49:26.743988 1170766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 00:49:26.743996 1170766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 00:49:26.744007 1170766 command_runner.go:130] > # "nofile=1024:2048"
	I1217 00:49:26.744021 1170766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 00:49:26.744329 1170766 command_runner.go:130] > # default_ulimits = [
	I1217 00:49:26.744680 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.744702 1170766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 00:49:26.745338 1170766 command_runner.go:130] > # no_pivot = false
	I1217 00:49:26.745359 1170766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 00:49:26.745367 1170766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 00:49:26.745979 1170766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 00:49:26.746000 1170766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 00:49:26.746006 1170766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 00:49:26.746013 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.746484 1170766 command_runner.go:130] > # conmon = ""
	I1217 00:49:26.746503 1170766 command_runner.go:130] > # Cgroup setting for conmon
	I1217 00:49:26.746512 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 00:49:26.746837 1170766 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 00:49:26.746859 1170766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 00:49:26.746866 1170766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 00:49:26.746875 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.747181 1170766 command_runner.go:130] > # conmon_env = [
	I1217 00:49:26.747508 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.747529 1170766 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 00:49:26.747536 1170766 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 00:49:26.747545 1170766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 00:49:26.747848 1170766 command_runner.go:130] > # default_env = [
	I1217 00:49:26.748185 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.748200 1170766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 00:49:26.748210 1170766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 00:49:26.750925 1170766 command_runner.go:130] > # selinux = false
	I1217 00:49:26.750948 1170766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 00:49:26.750958 1170766 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 00:49:26.750964 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.751661 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.751677 1170766 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 00:49:26.751683 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752150 1170766 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 00:49:26.752167 1170766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 00:49:26.752181 1170766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 00:49:26.752191 1170766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 00:49:26.752216 1170766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 00:49:26.752224 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752873 1170766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 00:49:26.752894 1170766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 00:49:26.752932 1170766 command_runner.go:130] > # the cgroup blockio controller.
	I1217 00:49:26.753417 1170766 command_runner.go:130] > # blockio_config_file = ""
	I1217 00:49:26.753438 1170766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 00:49:26.753444 1170766 command_runner.go:130] > # blockio parameters.
	I1217 00:49:26.754055 1170766 command_runner.go:130] > # blockio_reload = false
	I1217 00:49:26.754079 1170766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 00:49:26.754084 1170766 command_runner.go:130] > # irqbalance daemon.
	I1217 00:49:26.754673 1170766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 00:49:26.754692 1170766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 00:49:26.754700 1170766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 00:49:26.754708 1170766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 00:49:26.755498 1170766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 00:49:26.755515 1170766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 00:49:26.755521 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.756018 1170766 command_runner.go:130] > # rdt_config_file = ""
	I1217 00:49:26.756034 1170766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 00:49:26.756360 1170766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 00:49:26.756381 1170766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 00:49:26.756895 1170766 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 00:49:26.756917 1170766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 00:49:26.756925 1170766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 00:49:26.756935 1170766 command_runner.go:130] > # will be added.
	I1217 00:49:26.757272 1170766 command_runner.go:130] > # default_capabilities = [
	I1217 00:49:26.757675 1170766 command_runner.go:130] > # 	"CHOWN",
	I1217 00:49:26.758010 1170766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 00:49:26.758348 1170766 command_runner.go:130] > # 	"FSETID",
	I1217 00:49:26.758682 1170766 command_runner.go:130] > # 	"FOWNER",
	I1217 00:49:26.759200 1170766 command_runner.go:130] > # 	"SETGID",
	I1217 00:49:26.759214 1170766 command_runner.go:130] > # 	"SETUID",
	I1217 00:49:26.759238 1170766 command_runner.go:130] > # 	"SETPCAP",
	I1217 00:49:26.759246 1170766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 00:49:26.759249 1170766 command_runner.go:130] > # 	"KILL",
	I1217 00:49:26.759253 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759261 1170766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 00:49:26.759273 1170766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 00:49:26.759278 1170766 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 00:49:26.759290 1170766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 00:49:26.759297 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759305 1170766 command_runner.go:130] > default_sysctls = [
	I1217 00:49:26.759310 1170766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 00:49:26.759312 1170766 command_runner.go:130] > ]
	I1217 00:49:26.759317 1170766 command_runner.go:130] > # List of devices on the host that a
	I1217 00:49:26.759323 1170766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 00:49:26.759327 1170766 command_runner.go:130] > # allowed_devices = [
	I1217 00:49:26.759331 1170766 command_runner.go:130] > # 	"/dev/fuse",
	I1217 00:49:26.759338 1170766 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 00:49:26.759341 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759347 1170766 command_runner.go:130] > # List of additional devices. specified as
	I1217 00:49:26.759358 1170766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 00:49:26.759363 1170766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 00:49:26.759373 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759377 1170766 command_runner.go:130] > # additional_devices = [
	I1217 00:49:26.759380 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759386 1170766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 00:49:26.759396 1170766 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 00:49:26.759406 1170766 command_runner.go:130] > # 	"/etc/cdi",
	I1217 00:49:26.759411 1170766 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 00:49:26.759414 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759421 1170766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 00:49:26.759446 1170766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 00:49:26.759454 1170766 command_runner.go:130] > # Defaults to false.
	I1217 00:49:26.759459 1170766 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 00:49:26.759466 1170766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 00:49:26.759476 1170766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 00:49:26.759480 1170766 command_runner.go:130] > # hooks_dir = [
	I1217 00:49:26.759486 1170766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 00:49:26.759490 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759496 1170766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 00:49:26.759505 1170766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 00:49:26.759511 1170766 command_runner.go:130] > # its default mounts from the following two files:
	I1217 00:49:26.759515 1170766 command_runner.go:130] > #
	I1217 00:49:26.759522 1170766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 00:49:26.759532 1170766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 00:49:26.759537 1170766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 00:49:26.759540 1170766 command_runner.go:130] > #
	I1217 00:49:26.759546 1170766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 00:49:26.759556 1170766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 00:49:26.759563 1170766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 00:49:26.759569 1170766 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 00:49:26.759578 1170766 command_runner.go:130] > #
	I1217 00:49:26.759582 1170766 command_runner.go:130] > # default_mounts_file = ""
	I1217 00:49:26.759588 1170766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 00:49:26.759595 1170766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 00:49:26.759599 1170766 command_runner.go:130] > # pids_limit = -1
	I1217 00:49:26.759609 1170766 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1217 00:49:26.759619 1170766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 00:49:26.759625 1170766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 00:49:26.759634 1170766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 00:49:26.759644 1170766 command_runner.go:130] > # log_size_max = -1
	I1217 00:49:26.759653 1170766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 00:49:26.759660 1170766 command_runner.go:130] > # log_to_journald = false
	I1217 00:49:26.759666 1170766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 00:49:26.759671 1170766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 00:49:26.759676 1170766 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 00:49:26.759681 1170766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 00:49:26.759686 1170766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 00:49:26.759694 1170766 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 00:49:26.759700 1170766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 00:49:26.759704 1170766 command_runner.go:130] > # read_only = false
	I1217 00:49:26.759714 1170766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 00:49:26.759721 1170766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 00:49:26.759725 1170766 command_runner.go:130] > # live configuration reload.
	I1217 00:49:26.759734 1170766 command_runner.go:130] > # log_level = "info"
	I1217 00:49:26.759741 1170766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 00:49:26.759762 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.759770 1170766 command_runner.go:130] > # log_filter = ""
	I1217 00:49:26.759776 1170766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759782 1170766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 00:49:26.759790 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759801 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.759809 1170766 command_runner.go:130] > # uid_mappings = ""
	I1217 00:49:26.759815 1170766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759821 1170766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 00:49:26.759825 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759833 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761229 1170766 command_runner.go:130] > # gid_mappings = ""
	I1217 00:49:26.761253 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 00:49:26.761260 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761266 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761274 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761925 1170766 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 00:49:26.761952 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 00:49:26.761960 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761966 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761974 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.762609 1170766 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 00:49:26.762630 1170766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 00:49:26.762637 1170766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 00:49:26.762643 1170766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 00:49:26.763842 1170766 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 00:49:26.763856 1170766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 00:49:26.763864 1170766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 00:49:26.763869 1170766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 00:49:26.763873 1170766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 00:49:26.763878 1170766 command_runner.go:130] > # drop_infra_ctr = true
	I1217 00:49:26.763885 1170766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 00:49:26.763900 1170766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 00:49:26.763909 1170766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 00:49:26.763919 1170766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 00:49:26.763926 1170766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 00:49:26.763932 1170766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 00:49:26.763938 1170766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 00:49:26.763943 1170766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 00:49:26.763947 1170766 command_runner.go:130] > # shared_cpuset = ""
	I1217 00:49:26.763953 1170766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 00:49:26.763958 1170766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 00:49:26.763963 1170766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 00:49:26.763976 1170766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 00:49:26.763980 1170766 command_runner.go:130] > # pinns_path = ""
	I1217 00:49:26.763986 1170766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 00:49:26.764001 1170766 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1217 00:49:26.764011 1170766 command_runner.go:130] > # enable_criu_support = true
	I1217 00:49:26.764017 1170766 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 00:49:26.764022 1170766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 00:49:26.764027 1170766 command_runner.go:130] > # enable_pod_events = false
	I1217 00:49:26.764033 1170766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 00:49:26.764043 1170766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 00:49:26.764047 1170766 command_runner.go:130] > # default_runtime = "crun"
	I1217 00:49:26.764053 1170766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 00:49:26.764064 1170766 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1217 00:49:26.764077 1170766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 00:49:26.764086 1170766 command_runner.go:130] > # creation as a file is not desired either.
	I1217 00:49:26.764094 1170766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 00:49:26.764101 1170766 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 00:49:26.764105 1170766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 00:49:26.764108 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.764115 1170766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 00:49:26.764124 1170766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 00:49:26.764131 1170766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 00:49:26.764141 1170766 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 00:49:26.764144 1170766 command_runner.go:130] > #
	I1217 00:49:26.764149 1170766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 00:49:26.764154 1170766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 00:49:26.764162 1170766 command_runner.go:130] > # runtime_type = "oci"
	I1217 00:49:26.764167 1170766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 00:49:26.764172 1170766 command_runner.go:130] > # inherit_default_runtime = false
	I1217 00:49:26.764194 1170766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 00:49:26.764203 1170766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 00:49:26.764208 1170766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 00:49:26.764212 1170766 command_runner.go:130] > # monitor_env = []
	I1217 00:49:26.764217 1170766 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 00:49:26.764225 1170766 command_runner.go:130] > # allowed_annotations = []
	I1217 00:49:26.764231 1170766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 00:49:26.764239 1170766 command_runner.go:130] > # no_sync_log = false
	I1217 00:49:26.764246 1170766 command_runner.go:130] > # default_annotations = {}
	I1217 00:49:26.764250 1170766 command_runner.go:130] > # stream_websockets = false
	I1217 00:49:26.764254 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.764304 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.764313 1170766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 00:49:26.764320 1170766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 00:49:26.764331 1170766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 00:49:26.764338 1170766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 00:49:26.764341 1170766 command_runner.go:130] > #   in $PATH.
	I1217 00:49:26.764347 1170766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 00:49:26.764352 1170766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 00:49:26.764359 1170766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 00:49:26.764366 1170766 command_runner.go:130] > #   state.
	I1217 00:49:26.764376 1170766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 00:49:26.764387 1170766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 00:49:26.764393 1170766 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 00:49:26.764400 1170766 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 00:49:26.764409 1170766 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 00:49:26.764454 1170766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 00:49:26.764462 1170766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 00:49:26.764468 1170766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 00:49:26.764475 1170766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 00:49:26.764480 1170766 command_runner.go:130] > #   The currently recognized values are:
	I1217 00:49:26.764486 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 00:49:26.764494 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 00:49:26.764504 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 00:49:26.764515 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 00:49:26.764524 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 00:49:26.764532 1170766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 00:49:26.764539 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 00:49:26.764554 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 00:49:26.764565 1170766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 00:49:26.764575 1170766 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 00:49:26.764586 1170766 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 00:49:26.764592 1170766 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 00:49:26.764599 1170766 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 00:49:26.764605 1170766 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 00:49:26.764611 1170766 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 00:49:26.764620 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 00:49:26.764629 1170766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 00:49:26.764634 1170766 command_runner.go:130] > #   deprecated option "conmon".
	I1217 00:49:26.764642 1170766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 00:49:26.764650 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 00:49:26.764658 1170766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 00:49:26.764668 1170766 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 00:49:26.764675 1170766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 00:49:26.764680 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 00:49:26.764688 1170766 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 00:49:26.764692 1170766 command_runner.go:130] > #   conmon-rs by using:
	I1217 00:49:26.764705 1170766 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 00:49:26.764713 1170766 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 00:49:26.764724 1170766 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 00:49:26.764731 1170766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 00:49:26.764740 1170766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 00:49:26.764747 1170766 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 00:49:26.764755 1170766 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 00:49:26.764760 1170766 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 00:49:26.764769 1170766 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 00:49:26.764778 1170766 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 00:49:26.764783 1170766 command_runner.go:130] > #   when a machine crash happens.
	I1217 00:49:26.764794 1170766 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 00:49:26.764803 1170766 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 00:49:26.764814 1170766 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 00:49:26.764819 1170766 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 00:49:26.764831 1170766 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 00:49:26.764843 1170766 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 00:49:26.764845 1170766 command_runner.go:130] > #
	I1217 00:49:26.764850 1170766 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 00:49:26.764853 1170766 command_runner.go:130] > #
	I1217 00:49:26.764859 1170766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 00:49:26.764870 1170766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 00:49:26.764873 1170766 command_runner.go:130] > #
	I1217 00:49:26.764881 1170766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 00:49:26.764890 1170766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 00:49:26.764894 1170766 command_runner.go:130] > #
	I1217 00:49:26.764900 1170766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 00:49:26.764907 1170766 command_runner.go:130] > # feature.
	I1217 00:49:26.764910 1170766 command_runner.go:130] > #
	I1217 00:49:26.764916 1170766 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1217 00:49:26.764922 1170766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 00:49:26.764928 1170766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 00:49:26.764934 1170766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 00:49:26.764944 1170766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 00:49:26.764947 1170766 command_runner.go:130] > #
	I1217 00:49:26.764953 1170766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 00:49:26.764963 1170766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 00:49:26.764966 1170766 command_runner.go:130] > #
	I1217 00:49:26.764972 1170766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 00:49:26.764981 1170766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 00:49:26.764984 1170766 command_runner.go:130] > #
	I1217 00:49:26.764991 1170766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 00:49:26.764997 1170766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 00:49:26.765000 1170766 command_runner.go:130] > # limitation.
	I1217 00:49:26.765005 1170766 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 00:49:26.765010 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 00:49:26.765015 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765019 1170766 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 00:49:26.765028 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765047 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765056 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765061 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765065 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765069 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765073 1170766 command_runner.go:130] > allowed_annotations = [
	I1217 00:49:26.765077 1170766 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 00:49:26.765080 1170766 command_runner.go:130] > ]
	I1217 00:49:26.765084 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765089 1170766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 00:49:26.765093 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 00:49:26.765096 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765101 1170766 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 00:49:26.765110 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765114 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765119 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765124 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765132 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765136 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765141 1170766 command_runner.go:130] > privileged_without_host_devices = false
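Building on the handler documentation above and the crun/runc entries just shown, an additional handler could be declared along these lines. The handler name and binary path here are purely hypothetical examples, not anything configured in this run; the monitor settings mirror the crun/runc entries above:

	# Hypothetical extra handler, e.g. in /etc/crio/crio.conf.d/30-myruntime.conf
	[crio.runtime.runtimes.myruntime]
	runtime_path = "/usr/local/bin/myruntime"
	runtime_type = "oci"
	runtime_root = "/run/myruntime"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Allow this handler to process the seccomp notifier annotation described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then select this handler through its runtime handler (RuntimeClass) name; pods that provide no handler keep using default_runtime, as noted above.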
	I1217 00:49:26.765148 1170766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 00:49:26.765158 1170766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 00:49:26.765165 1170766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 00:49:26.765173 1170766 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 00:49:26.765184 1170766 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 00:49:26.765195 1170766 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 00:49:26.765205 1170766 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 00:49:26.765212 1170766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 00:49:26.765226 1170766 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 00:49:26.765235 1170766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 00:49:26.765244 1170766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 00:49:26.765251 1170766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 00:49:26.765254 1170766 command_runner.go:130] > # Example:
	I1217 00:49:26.765266 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 00:49:26.765271 1170766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 00:49:26.765283 1170766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 00:49:26.765288 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 00:49:26.765297 1170766 command_runner.go:130] > # cpuset = "0-1"
	I1217 00:49:26.765301 1170766 command_runner.go:130] > # cpushares = "5"
	I1217 00:49:26.765305 1170766 command_runner.go:130] > # cpuquota = "1000"
	I1217 00:49:26.765309 1170766 command_runner.go:130] > # cpuperiod = "100000"
	I1217 00:49:26.765312 1170766 command_runner.go:130] > # cpulimit = "35"
	I1217 00:49:26.765317 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.765321 1170766 command_runner.go:130] > # The workload name is workload-type.
	I1217 00:49:26.765337 1170766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 00:49:26.765342 1170766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 00:49:26.765348 1170766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 00:49:26.765357 1170766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 00:49:26.765362 1170766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 00:49:26.765372 1170766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 00:49:26.765378 1170766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 00:49:26.765388 1170766 command_runner.go:130] > # Default value is set to true
	I1217 00:49:26.765392 1170766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 00:49:26.765399 1170766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 00:49:26.765404 1170766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 00:49:26.765413 1170766 command_runner.go:130] > # Default value is set to 'false'
	I1217 00:49:26.765417 1170766 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 00:49:26.765422 1170766 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1217 00:49:26.765431 1170766 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 00:49:26.765434 1170766 command_runner.go:130] > # timezone = ""
	I1217 00:49:26.765440 1170766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 00:49:26.765444 1170766 command_runner.go:130] > #
	I1217 00:49:26.765450 1170766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 00:49:26.765460 1170766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 00:49:26.765464 1170766 command_runner.go:130] > [crio.image]
	I1217 00:49:26.765470 1170766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 00:49:26.765481 1170766 command_runner.go:130] > # default_transport = "docker://"
	I1217 00:49:26.765487 1170766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 00:49:26.765498 1170766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765502 1170766 command_runner.go:130] > # global_auth_file = ""
	I1217 00:49:26.765506 1170766 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 00:49:26.765512 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765517 1170766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.765523 1170766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 00:49:26.765536 1170766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765541 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765550 1170766 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 00:49:26.765556 1170766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 00:49:26.765562 1170766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1217 00:49:26.765574 1170766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1217 00:49:26.765580 1170766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 00:49:26.765583 1170766 command_runner.go:130] > # pause_command = "/pause"
	I1217 00:49:26.765589 1170766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 00:49:26.765595 1170766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 00:49:26.765606 1170766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 00:49:26.765612 1170766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 00:49:26.765624 1170766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 00:49:26.765630 1170766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 00:49:26.765638 1170766 command_runner.go:130] > # pinned_images = [
	I1217 00:49:26.765641 1170766 command_runner.go:130] > # ]
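As a sketch, pinning the pause image referenced earlier (registry.k8s.io/pause:3.10.1) so it is excluded from the kubelet's garbage collection could look like the following; this run leaves pinned_images at its default, so the snippet is purely illustrative:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
	]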
	I1217 00:49:26.765647 1170766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 00:49:26.765654 1170766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 00:49:26.765667 1170766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 00:49:26.765673 1170766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 00:49:26.765682 1170766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 00:49:26.765687 1170766 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 00:49:26.765692 1170766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 00:49:26.765703 1170766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 00:49:26.765709 1170766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 00:49:26.765722 1170766 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1217 00:49:26.765729 1170766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 00:49:26.765738 1170766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 00:49:26.765749 1170766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 00:49:26.765755 1170766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 00:49:26.765762 1170766 command_runner.go:130] > # changing them here.
	I1217 00:49:26.765771 1170766 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 00:49:26.765775 1170766 command_runner.go:130] > # insecure_registries = [
	I1217 00:49:26.765778 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765785 1170766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 00:49:26.765793 1170766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 00:49:26.765799 1170766 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 00:49:26.765805 1170766 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 00:49:26.765813 1170766 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 00:49:26.765819 1170766 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 00:49:26.765831 1170766 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 00:49:26.765835 1170766 command_runner.go:130] > # auto_reload_registries = false
	I1217 00:49:26.765842 1170766 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 00:49:26.765854 1170766 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1217 00:49:26.765860 1170766 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 00:49:26.765868 1170766 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 00:49:26.765872 1170766 command_runner.go:130] > # The mode of short name resolution.
	I1217 00:49:26.765879 1170766 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 00:49:26.765891 1170766 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1217 00:49:26.765899 1170766 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 00:49:26.765908 1170766 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 00:49:26.765914 1170766 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 00:49:26.765920 1170766 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 00:49:26.765924 1170766 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 00:49:26.765930 1170766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 00:49:26.765933 1170766 command_runner.go:130] > # CNI plugins.
	I1217 00:49:26.765942 1170766 command_runner.go:130] > [crio.network]
	I1217 00:49:26.765948 1170766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 00:49:26.765958 1170766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 00:49:26.765965 1170766 command_runner.go:130] > # cni_default_network = ""
	I1217 00:49:26.765972 1170766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 00:49:26.765976 1170766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 00:49:26.765982 1170766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 00:49:26.765989 1170766 command_runner.go:130] > # plugin_dirs = [
	I1217 00:49:26.765992 1170766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 00:49:26.765995 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765999 1170766 command_runner.go:130] > # List of included pod metrics.
	I1217 00:49:26.766003 1170766 command_runner.go:130] > # included_pod_metrics = [
	I1217 00:49:26.766006 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766012 1170766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 00:49:26.766015 1170766 command_runner.go:130] > [crio.metrics]
	I1217 00:49:26.766020 1170766 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 00:49:26.766031 1170766 command_runner.go:130] > # enable_metrics = false
	I1217 00:49:26.766037 1170766 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 00:49:26.766046 1170766 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 00:49:26.766053 1170766 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 00:49:26.766061 1170766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 00:49:26.766070 1170766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 00:49:26.766074 1170766 command_runner.go:130] > # metrics_collectors = [
	I1217 00:49:26.766078 1170766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 00:49:26.766083 1170766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 00:49:26.766087 1170766 command_runner.go:130] > # 	"containers_oom_total",
	I1217 00:49:26.766090 1170766 command_runner.go:130] > # 	"processes_defunct",
	I1217 00:49:26.766094 1170766 command_runner.go:130] > # 	"operations_total",
	I1217 00:49:26.766099 1170766 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 00:49:26.766103 1170766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 00:49:26.766107 1170766 command_runner.go:130] > # 	"operations_errors_total",
	I1217 00:49:26.766111 1170766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 00:49:26.766116 1170766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 00:49:26.766120 1170766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 00:49:26.766123 1170766 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 00:49:26.766131 1170766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 00:49:26.766140 1170766 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 00:49:26.766144 1170766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 00:49:26.766149 1170766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 00:49:26.766160 1170766 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 00:49:26.766163 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766169 1170766 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 00:49:26.766173 1170766 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 00:49:26.766178 1170766 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 00:49:26.766182 1170766 command_runner.go:130] > # metrics_port = 9090
	I1217 00:49:26.766187 1170766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 00:49:26.766195 1170766 command_runner.go:130] > # metrics_socket = ""
	I1217 00:49:26.766200 1170766 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 00:49:26.766206 1170766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 00:49:26.766216 1170766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 00:49:26.766221 1170766 command_runner.go:130] > # certificate on any modification event.
	I1217 00:49:26.766224 1170766 command_runner.go:130] > # metrics_cert = ""
	I1217 00:49:26.766230 1170766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 00:49:26.766239 1170766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 00:49:26.766243 1170766 command_runner.go:130] > # metrics_key = ""
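A hedged sketch of turning on the Prometheus metrics endpoint documented above (it stays disabled in this run); the collector names are taken from the default list printed above:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]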
	I1217 00:49:26.766249 1170766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 00:49:26.766252 1170766 command_runner.go:130] > [crio.tracing]
	I1217 00:49:26.766257 1170766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 00:49:26.766261 1170766 command_runner.go:130] > # enable_tracing = false
	I1217 00:49:26.766266 1170766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 00:49:26.766270 1170766 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 00:49:26.766277 1170766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 00:49:26.766287 1170766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
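Similarly, exporting OpenTelemetry traces only needs the three options shown above; the endpoint below is the documented default and the snippet is illustrative, not part of this run:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# Per the comment above, 1000000 means every span is sampled.
	tracing_sampling_rate_per_million = 1000000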
	I1217 00:49:26.766292 1170766 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 00:49:26.766295 1170766 command_runner.go:130] > [crio.nri]
	I1217 00:49:26.766300 1170766 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 00:49:26.766308 1170766 command_runner.go:130] > # enable_nri = true
	I1217 00:49:26.766312 1170766 command_runner.go:130] > # NRI socket to listen on.
	I1217 00:49:26.766320 1170766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 00:49:26.766324 1170766 command_runner.go:130] > # NRI plugin directory to use.
	I1217 00:49:26.766328 1170766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 00:49:26.766333 1170766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 00:49:26.766338 1170766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 00:49:26.766343 1170766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 00:49:26.766396 1170766 command_runner.go:130] > # nri_disable_connections = false
	I1217 00:49:26.766406 1170766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 00:49:26.766411 1170766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 00:49:26.766416 1170766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 00:49:26.766420 1170766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 00:49:26.766425 1170766 command_runner.go:130] > # NRI default validator configuration.
	I1217 00:49:26.766431 1170766 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 00:49:26.766438 1170766 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 00:49:26.766447 1170766 command_runner.go:130] > # can be restricted/rejected:
	I1217 00:49:26.766451 1170766 command_runner.go:130] > # - OCI hook injection
	I1217 00:49:26.766456 1170766 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 00:49:26.766466 1170766 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 00:49:26.766471 1170766 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 00:49:26.766475 1170766 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 00:49:26.766486 1170766 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 00:49:26.766493 1170766 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 00:49:26.766498 1170766 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 00:49:26.766501 1170766 command_runner.go:130] > #
	I1217 00:49:26.766505 1170766 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 00:49:26.766509 1170766 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 00:49:26.766519 1170766 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 00:49:26.766525 1170766 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 00:49:26.766531 1170766 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 00:49:26.766540 1170766 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 00:49:26.766545 1170766 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 00:49:26.766550 1170766 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 00:49:26.766558 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766567 1170766 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
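To illustrate the validator options listed above, a hypothetical drop-in that rejects OCI hook injection and unconfined-seccomp adjustments requested by NRI plugins might look like this (nothing of the sort is enabled in this run):

	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_reject_unconfined_seccomp_adjustment = true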
	I1217 00:49:26.766574 1170766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 00:49:26.766579 1170766 command_runner.go:130] > [crio.stats]
	I1217 00:49:26.766584 1170766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 00:49:26.766590 1170766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 00:49:26.766597 1170766 command_runner.go:130] > # stats_collection_period = 0
	I1217 00:49:26.766603 1170766 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 00:49:26.766610 1170766 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 00:49:26.766618 1170766 command_runner.go:130] > # collection_period = 0
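Finally, the crio.stats section above supports periodic collection instead of the on-demand default; a sketch that polls every 10 seconds (values assumed for illustration):

	[crio.stats]
	stats_collection_period = 10
	collection_period = 10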
	I1217 00:49:26.769313 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.709999291Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 00:49:26.769335 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710041801Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 00:49:26.769350 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.7100717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 00:49:26.769358 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710096963Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 00:49:26.769367 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710182557Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.769376 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710452795Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 00:49:26.769388 1170766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1217 00:49:26.769780 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:26.769799 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:26.769817 1170766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:49:26.769847 1170766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:49:26.769980 1170766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:49:26.770057 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:49:26.777246 1170766 command_runner.go:130] > kubeadm
	I1217 00:49:26.777268 1170766 command_runner.go:130] > kubectl
	I1217 00:49:26.777274 1170766 command_runner.go:130] > kubelet
	I1217 00:49:26.778436 1170766 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:49:26.778500 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:49:26.786236 1170766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:49:26.799825 1170766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:49:26.813059 1170766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 00:49:26.828019 1170766 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:49:26.831670 1170766 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:49:26.831993 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.960014 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:27.502236 1170766 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:49:27.502256 1170766 certs.go:195] generating shared ca certs ...
	I1217 00:49:27.502272 1170766 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:27.502407 1170766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:49:27.502457 1170766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:49:27.502465 1170766 certs.go:257] generating profile certs ...
	I1217 00:49:27.502566 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:49:27.502627 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:49:27.502667 1170766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:49:27.502675 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:49:27.502694 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:49:27.502705 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:49:27.502716 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:49:27.502725 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:49:27.502736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:49:27.502746 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:49:27.502759 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:49:27.502805 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:49:27.502840 1170766 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:49:27.502848 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:49:27.502873 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:49:27.502896 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:49:27.502918 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:49:27.502963 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:27.502994 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.503007 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.503017 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.503565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:49:27.523390 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:49:27.542159 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:49:27.560122 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:49:27.578247 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:49:27.596258 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:49:27.613943 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:49:27.632292 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:49:27.650819 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:49:27.669066 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:49:27.687617 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:49:27.705744 1170766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:49:27.719458 1170766 ssh_runner.go:195] Run: openssl version
	I1217 00:49:27.725722 1170766 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:49:27.726120 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.733628 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:49:27.741335 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745236 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745284 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745341 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.786230 1170766 command_runner.go:130] > 51391683
	I1217 00:49:27.786728 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:49:27.794669 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.802040 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:49:27.809799 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813741 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813839 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813906 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.854690 1170766 command_runner.go:130] > 3ec20f2e
	I1217 00:49:27.854778 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:49:27.862235 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.869424 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:49:27.877608 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881295 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881338 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881389 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.921808 1170766 command_runner.go:130] > b5213941
	I1217 00:49:27.922298 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
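
Each CA bundle above is installed by symlinking /etc/ssl/certs/<subject-hash>.0 to the PEM file, where the hash is whatever `openssl x509 -hash -noout` prints (51391683, 3ec20f2e, b5213941 in this run). A rough sketch of that check, assuming openssl is on PATH; the path mirrors the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLinked computes the certificate's subject hash with openssl and then
// verifies that /etc/ssl/certs/<hash>.0 exists and is a symlink.
func hashLinked(pemPath string) (bool, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return false, err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	fi, err := os.Lstat(link)
	if err != nil {
		return false, err
	}
	return fi.Mode()&os.ModeSymlink != 0, nil
}

func main() {
	ok, err := hashLinked("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println("hash symlink present:", ok, err)
}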
	I1217 00:49:27.929684 1170766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933543 1170766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933568 1170766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:49:27.933576 1170766 command_runner.go:130] > Device: 259,1	Inode: 3648879     Links: 1
	I1217 00:49:27.933583 1170766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:27.933589 1170766 command_runner.go:130] > Access: 2025-12-17 00:45:19.435586201 +0000
	I1217 00:49:27.933595 1170766 command_runner.go:130] > Modify: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933600 1170766 command_runner.go:130] > Change: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933605 1170766 command_runner.go:130] >  Birth: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933682 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:49:27.974244 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:27.974730 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:49:28.015269 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.015758 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:49:28.065826 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.066538 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:49:28.108358 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.108531 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:49:28.149181 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.149647 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:49:28.190353 1170766 command_runner.go:130] > Certificate will not expire
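
The `openssl x509 -checkend 86400` calls above ask whether each certificate expires within the next 24 hours; "Certificate will not expire" means it does not. The same check can be done natively with crypto/x509. A sketch, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}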
	I1217 00:49:28.190474 1170766 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:28.190584 1170766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:49:28.190665 1170766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:49:28.221145 1170766 cri.go:89] found id: ""
	I1217 00:49:28.221267 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:49:28.228507 1170766 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:49:28.228597 1170766 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:49:28.228619 1170766 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:49:28.229395 1170766 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:49:28.229438 1170766 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:49:28.229512 1170766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:49:28.236906 1170766 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:49:28.237356 1170766 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.237502 1170766 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389537" cluster setting kubeconfig missing "functional-389537" context setting]
	I1217 00:49:28.237796 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
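
Here kubeconfig.go notices the profile's cluster and context entries are missing and rewrites the kubeconfig. The following is not minikube's implementation, just a minimal sketch of the same repair using client-go's clientcmd package; the user credentials entry is left out:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repair adds a cluster and a matching context entry for the given profile.
func repair(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if cfg.Clusters == nil {
		cfg.Clusters = map[string]*clientcmdapi.Cluster{}
	}
	if cfg.Contexts == nil {
		cfg.Contexts = map[string]*clientcmdapi.Context{}
	}
	cluster := clientcmdapi.NewCluster()
	cluster.Server = server // e.g. https://192.168.49.2:8441
	cfg.Clusters[name] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumes a user entry of the same name exists
	cfg.Contexts[name] = ctx

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repair("/home/jenkins/minikube-integration/22168-1134739/kubeconfig",
		"functional-389537", "https://192.168.49.2:8441"); err != nil {
		panic(err)
	}
}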
	I1217 00:49:28.238221 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.238396 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.238920 1170766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:49:28.238939 1170766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:49:28.238945 1170766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:49:28.238950 1170766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:49:28.238954 1170766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:49:28.238995 1170766 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:49:28.239224 1170766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:49:28.246965 1170766 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:49:28.247039 1170766 kubeadm.go:602] duration metric: took 17.573937ms to restartPrimaryControlPlane
	I1217 00:49:28.247066 1170766 kubeadm.go:403] duration metric: took 56.597633ms to StartCluster
	I1217 00:49:28.247104 1170766 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.247179 1170766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.247837 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.248043 1170766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:49:28.248489 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:28.248569 1170766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:49:28.248676 1170766 addons.go:70] Setting storage-provisioner=true in profile "functional-389537"
	I1217 00:49:28.248696 1170766 addons.go:239] Setting addon storage-provisioner=true in "functional-389537"
	I1217 00:49:28.248719 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.249218 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.251024 1170766 addons.go:70] Setting default-storageclass=true in profile "functional-389537"
	I1217 00:49:28.251049 1170766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389537"
	I1217 00:49:28.251367 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.254651 1170766 out.go:179] * Verifying Kubernetes components...
	I1217 00:49:28.257533 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:28.287633 1170766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:49:28.290502 1170766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.290526 1170766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:49:28.290609 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.312501 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.312677 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.312998 1170766 addons.go:239] Setting addon default-storageclass=true in "functional-389537"
	I1217 00:49:28.313045 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.313499 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.334272 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.347658 1170766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:28.347681 1170766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:49:28.347742 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.374030 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.486040 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:28.502536 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.510858 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.252938 1170766 node_ready.go:35] waiting up to 6m0s for node "functional-389537" to be "Ready" ...
	I1217 00:49:29.253062 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.253118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.253338 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253370 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253391 1170766 retry.go:31] will retry after 245.662002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253435 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253452 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253459 1170766 retry.go:31] will retry after 276.192706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
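
While the apiserver is still coming back up, every `kubectl apply` for the addon manifests fails with connection refused and retry.go schedules another attempt after a short, jittered delay. A minimal sketch of that retry pattern; the durations and attempt count here are illustrative, not minikube's actual policy:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered, roughly doubling interval between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retry(5, 250*time.Millisecond, func() error {
		// Stand-in for the failing `kubectl apply` while the apiserver is down.
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}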
	I1217 00:49:29.253512 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.500088 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:29.530677 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.579588 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.579743 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.579792 1170766 retry.go:31] will retry after 478.611243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607395 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.607453 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607473 1170766 retry.go:31] will retry after 213.763614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.753751 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.822424 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.886054 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.886099 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.886150 1170766 retry.go:31] will retry after 580.108639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.059411 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.142412 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.142520 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.142548 1170766 retry.go:31] will retry after 335.340669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.253845 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.254297 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.466582 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:30.478378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.546834 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.546919 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.546953 1170766 retry.go:31] will retry after 1.248601584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557846 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.557940 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557983 1170766 retry.go:31] will retry after 1.081200972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.753182 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.253427 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.253542 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.253954 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
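
node_ready.go keeps issuing GET /api/v1/nodes/functional-389537 roughly every half second and inspects the node's Ready condition; while the apiserver is down each probe fails exactly as warned above. A sketch of one such probe using only the standard library; client-certificate auth and TLS verification (see the kapi.go client config earlier) are omitted here for brevity, so a live apiserver would reject this request:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node models just the fields needed to read the Ready condition.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-389537")
	fmt.Println("node Ready:", ok, err)
}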
	I1217 00:49:31.639465 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:31.698941 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.698993 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.699013 1170766 retry.go:31] will retry after 1.870151971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.754126 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.754197 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.754530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.795965 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:31.861932 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.861982 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.862003 1170766 retry.go:31] will retry after 1.008225242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.253184 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.253372 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.253717 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.753360 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.871155 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:32.928211 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:32.931741 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.931825 1170766 retry.go:31] will retry after 1.349013392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.253256 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.569378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:33.627393 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:33.631136 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.631170 1170766 retry.go:31] will retry after 1.556307432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.753384 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.753462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.753732 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.753786 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.253674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.281872 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:34.338860 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:34.338952 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.338994 1170766 retry.go:31] will retry after 2.730785051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.753261 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.753705 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.188371 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:35.253305 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.253379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.253659 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:35.253682 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253699 1170766 retry.go:31] will retry after 4.092845301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253755 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:36.253666 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.753252 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.753327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.070065 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:37.127098 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:37.130934 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.130970 1170766 retry.go:31] will retry after 4.776908541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.253166 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.753194 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.753659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.253587 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.253946 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.254001 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.753912 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.753994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.754371 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.254004 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.254408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.346816 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:39.407133 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:39.411576 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.411608 1170766 retry.go:31] will retry after 4.420378296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.753168 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.753277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.753541 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.253304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.753271 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.753349 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.753656 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.753707 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:41.253157 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.253546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.909084 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:41.968890 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:41.968925 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:41.968945 1170766 retry.go:31] will retry after 4.028082996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:42.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.253706 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.753164 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.753238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.753522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.253354 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.253724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:43.253792 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.753558 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.753644 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.753949 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.832189 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:43.890902 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:43.894375 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:43.894408 1170766 retry.go:31] will retry after 8.166287631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:44.253620 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.753652 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.753996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.253708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.254080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:45.254153 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.753590 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.753659 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.753909 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.997293 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:46.061414 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:46.061451 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.061470 1170766 retry.go:31] will retry after 11.083982648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.253886 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.253962 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.254309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.754095 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.754205 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.754534 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.253185 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.253531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.753195 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.753675 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:48.253335 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.253411 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.253779 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.753583 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.753654 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.253646 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.254063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.753928 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.754007 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.754325 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.754377 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.253612 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.253695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.253960 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.753804 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.753885 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.254063 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.254137 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.254480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.753480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.060996 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:52.120691 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:52.124209 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.124248 1170766 retry.go:31] will retry after 5.294346985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.253619 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.254054 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.753693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.753855 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.754194 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.254037 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.254206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.254462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.254510 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.753239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.753523 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.253651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.753370 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.753449 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.753783 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.253341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.753617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.753681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
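(Interleaved with the apply retries, the log shows a second loop: roughly every 500ms a GET for the node object is issued and fails with "connection refused", producing the node_ready "will retry" warnings. The sketch below illustrates that readiness-polling pattern with only the Go standard library; the function name `waitForNode`, the poll interval/timeout, and the use of InsecureSkipVerify are assumptions made for the example, not minikube's node_ready.go code.)

```go
// Illustrative sketch only: poll the apiserver's node endpoint until a
// TCP/HTTP response arrives or a deadline passes, mirroring the ~500ms
// GET loop and "connection refused" warnings in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForNode polls url every interval until any HTTP response arrives
// (even 401/403 means the apiserver socket is up) or timeout expires.
func waitForNode(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The test cluster uses a self-signed cert; skipping
			// verification here is purely for this illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable, status:", resp.Status)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node not reachable before deadline: %w", err)
		}
		fmt.Println("will retry:", err) // e.g. "connect: connection refused"
		time.Sleep(interval)
	}
}

func main() {
	_ = waitForNode("https://192.168.49.2:8441/api/v1/nodes/functional-389537",
		500*time.Millisecond, 2*time.Minute)
}
```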
	I1217 00:49:57.146315 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:57.205486 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.209162 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.209194 1170766 retry.go:31] will retry after 16.847278069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.253385 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.253754 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.419134 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:57.479419 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.482994 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.483029 1170766 retry.go:31] will retry after 11.356263683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.753493 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.253330 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.253407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.753639 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.753716 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.754093 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.754160 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.753887 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.754215 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.253724 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.253810 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.254155 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.754120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.754206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.754562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:00.754621 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.253240 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.253370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.253698 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.253607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.253193 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.253613 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.753572 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.754045 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.253947 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.254268 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.253850 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.254364 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:05.754125 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.754208 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.754551 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.253164 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.253237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.253346 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.253428 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.253751 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.753540 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.753830 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:07.753881 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.253424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.253762 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.753666 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.753745 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.754125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.840442 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:08.894240 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:08.898223 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:08.898257 1170766 retry.go:31] will retry after 31.216976051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:09.253588 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.253672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.753741 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.754120 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:09.754170 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.253935 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.254009 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.253844 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.254271 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.754088 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.754175 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.754499 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:11.754558 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:12.253187 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.253522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.753227 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.753589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.753701 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.057576 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:14.115415 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:14.119129 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.119165 1170766 retry.go:31] will retry after 28.147339136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.253462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.253544 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.253877 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.253932 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:14.753601 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.753672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.753968 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.253641 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.253732 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.253997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.753777 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.253982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:16.254362 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:16.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.754016 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.253840 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.254281 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.754086 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.754162 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.754503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.253672 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.753651 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.753736 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.754062 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:18.754120 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:19.253943 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.254033 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.254372 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.753082 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.753159 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.753506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.753388 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.753479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.753884 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.253955 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.254007 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:21.753781 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.753865 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.254001 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.254355 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.753077 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.753153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.753404 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.253112 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.253188 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.253528 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.753620 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:23.753996 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.253660 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.253733 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.254004 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.753783 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.753862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.754204 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.253869 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.253944 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.254293 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:25.754034 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:26.253773 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.253845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.753983 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.754381 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.253979 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.753096 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.753176 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.753474 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.253306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:28.753591 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.753916 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.253231 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.753237 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.753688 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.753320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:30.753699 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.253635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.753306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.753379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.753638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.253669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.753350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.753691 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:32.753743 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.253478 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.253794 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.753653 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.754080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.253900 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.254314 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.754008 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:34.754052 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.253945 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.254265 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.753720 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.754034 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.253598 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.753708 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.753783 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.754104 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:36.754165 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:37.253918 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.253995 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.254311 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.753961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.253926 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.254006 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.254296 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.754122 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.754199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.754549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:38.754615 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:39.253269 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.253710 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.753180 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.753267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.753624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.116186 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:40.183350 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:40.183412 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.183435 1170766 retry.go:31] will retry after 25.382750455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.253664 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.254066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.753634 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.753706 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.753966 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.253718 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.253791 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.254134 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:41.254188 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:41.754033 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.754109 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.754488 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.253178 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.253257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.253626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.266982 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:42.344498 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:42.344537 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.344558 1170766 retry.go:31] will retry after 17.409313592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.753120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.753194 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.253776 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.253851 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.753822 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.753901 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.754256 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.253756 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.253922 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.254427 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.753326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:46.753299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.753383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.253287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.753623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.253662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.253705 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:48.753669 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.753752 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.754072 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.253894 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.253970 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.254291 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.753636 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.253843 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.253926 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.254289 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:50.254345 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:50.754111 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.754190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.754553 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.253242 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:52.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.754044 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.754422 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.253188 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.753632 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:54.753689 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.253391 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.253469 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.753512 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.753582 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.753864 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.253390 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:57.753200 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.753283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.753631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.253436 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.253523 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.253931 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.753948 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.754017 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.754272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.254035 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.254118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.254476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.254537 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:59.753199 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.754864 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:59.815839 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815879 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815961 1170766 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
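
At this point the retry budget for the default-storageclass addon is exhausted and minikube surfaces the failure as a warning rather than aborting. The stderr recommends --validate=false, but validation is not the real problem here; the apiserver is unreachable. A hedged Go sketch of telling those two cases apart from the captured stderr follows; the function and messages are illustrative assumptions.

    // Illustrative sketch only: decide whether an apply failure is worth
    // retrying by inspecting the captured stderr.
    package main

    import (
        "fmt"
        "strings"
    )

    func classifyApplyError(stderr string) string {
        switch {
        case strings.Contains(stderr, "connect: connection refused"):
            return "apiserver unreachable; retry once the control plane is back"
        case strings.Contains(stderr, "error validating data"):
            return "manifest failed schema validation; fix the YAML or pass --validate=false"
        default:
            return "unclassified apply failure"
        }
    }

    func main() {
        stderr := `error: error validating "/etc/kubernetes/addons/storageclass.yaml": ... dial tcp [::1]:8441: connect: connection refused`
        fmt.Println(classifyApplyError(stderr))
    }
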
	I1217 00:51:00.253363 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.753302 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.753369 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.753727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:01.753787 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.253347 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.253689 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.753247 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.753324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.753665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.754077 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:03.754136 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.253779 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.254148 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.753646 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.753717 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.753978 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.253862 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.253937 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.254272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.566658 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:51:05.627909 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.627957 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.628043 1170766 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:05.631111 1170766 out.go:179] * Enabled addons: 
	I1217 00:51:05.634718 1170766 addons.go:530] duration metric: took 1m37.386158891s for enable addons: enabled=[]
	I1217 00:51:05.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.753674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.253279 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.253356 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.253651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:06.753202 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.753286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.753613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.253337 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.253416 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.753382 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.753456 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.753719 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.253394 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:08.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:08.753597 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.753675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.754006 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.253704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.753759 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.754219 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.254036 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.254117 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.254443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:10.254499 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:10.753146 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.753222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.753504 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.253431 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.253508 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.253817 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.753238 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:12.753661 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:13.253229 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.753608 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.753242 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.753314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.753606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.253289 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.253371 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.253681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:15.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.753291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.253602 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.253661 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:17.253717 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:17.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.753577 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.253297 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.253364 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.253668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.753853 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.754277 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.254102 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.254185 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.254526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:19.254586 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:19.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.753311 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.753580 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.253722 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.753652 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.253372 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.253701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.753406 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.753495 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:21.753874 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:22.253257 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.753561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.753603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.753685 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.754925 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1217 00:51:23.754986 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:24.253170 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.253267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.253617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.753328 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.753409 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.753746 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.253469 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.253546 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.253880 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.753657 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.753917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.253603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.253711 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.254049 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:26.254102 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:26.753618 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.753694 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.253707 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.753801 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.754135 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.253730 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.253819 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.254157 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:28.254213 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:28.754062 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.754150 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.754428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.253246 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.753316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.753701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.753680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:30.753758 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.253283 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.753617 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.753891 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.253582 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.753873 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.753956 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.754335 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:32.754410 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:33.253082 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.253153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.253408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.753211 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.253332 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.253414 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.253813 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.753517 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.753595 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.753879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.253210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.253725 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:35.753393 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.753476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.753815 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.253180 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.253769 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.253245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.253568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.753118 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.753199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.753448 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:37.753489 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.253352 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.253435 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.253790 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.753633 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.753713 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.754052 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.253630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.253702 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.254026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.754056 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:39.754113 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.253723 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.253798 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.254106 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.754024 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.253834 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.253927 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.254334 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.754152 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.754231 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.754552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:41.754611 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:42.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.753201 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.253361 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.253440 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.753589 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.753665 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.253738 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.253820 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.254118 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.254169 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:44.753956 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.754034 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.754376 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.253875 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.253954 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.254382 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.753128 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.753548 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.253245 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.253330 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.753570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:46.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.253226 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.253657 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.753364 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.753750 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.253392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.753652 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.753737 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.754073 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:48.754130 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.253766 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.253847 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.254210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.753704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.253788 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.253862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.254182 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.753997 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.754076 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.754412 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:50.754497 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:51.253162 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.253230 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.753249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.753596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.753309 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.753387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.753660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.253234 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.253323 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.253702 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:53.253761 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:53.753655 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.753749 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.754112 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.253936 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.753647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.253232 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.253310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.753558 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:55.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:56.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.253610 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.753338 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.253172 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.253533 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.753301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.753667 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:57.753734 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:58.253323 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.753599 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.753674 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.253780 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.253867 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.254242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.754384 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.754441 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:00.261843 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.262054 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.262449 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.753175 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.253252 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.253251 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.253328 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.253683 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:02.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.753677 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.253422 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.753719 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.753793 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:04.254346 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.753969 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.253791 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.253873 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.254220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.753910 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.753984 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.754315 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.253622 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.253718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.254014 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.753813 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.753893 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.754190 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.754244 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:07.254039 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.254467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.753167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.753245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.753517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.253390 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.753749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.753834 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.754171 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.253662 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.253741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.254087 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:09.254142 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:09.753914 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.753986 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.754327 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.254174 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.254257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.254595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.753297 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.753370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.253408 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.253499 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.253838 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.753526 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.753601 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.753894 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:11.753949 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:12.253560 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.253632 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.753805 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.754169 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.253986 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.254075 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.254435 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.754109 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.754186 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.754492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:13.754550 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:14.253219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.253640 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.753663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.253555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.753256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.753575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:16.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.253273 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:16.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:16.753300 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.753651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.253388 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.253700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.753205 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:18.253374 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.253447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:18.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:18.753640 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.754063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.253884 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.253974 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.753695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:20.253726 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.253806 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.254124 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:20.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:20.753974 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.754048 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.754388 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.253158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.253440 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.253238 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.253660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:23.253247 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:23.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.253272 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.253348 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.253619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:25.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.253630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:25.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:25.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.753591 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.253192 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.253269 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.753310 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.753396 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.253223 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.253476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.753141 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.753220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:27.753593 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:28.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.253455 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.253810 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:28.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.753915 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.754546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.253345 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.753321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:29.753656 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:30.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.253247 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.253502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:30.753270 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.753355 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.753724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.753227 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.753572 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:32.253250 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:32.253716 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:32.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.253201 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.253278 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.753973 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:34.253749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.253821 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.254108 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:34.254159 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:34.753679 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.753775 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.253885 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.253959 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.754073 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.754148 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.754487 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.753378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.753774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:36.753831 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:37.253510 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.253591 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.253957 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:37.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.253981 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.254058 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.753581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:39.253264 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:39.253650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:39.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.253402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.253743 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.753429 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.753503 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.753767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:41.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:41.253687 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:41.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.753639 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.253183 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.253286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.253225 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.253637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.753531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:43.753579 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:44.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.253576 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:44.753186 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.753264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.753599 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.253295 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.253735 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.753614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:45.753668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:46.253322 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.253398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:46.753161 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.753496 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.253291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:47.753697 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:48.253324 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:48.753549 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.753624 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.253784 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.753644 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.753723 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.754017 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:49.754065 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:50.253810 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.254239 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:50.753899 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.753975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.754306 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.253987 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.753830 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.753910 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.754242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:51.754311 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:52.254071 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.254149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.254484 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:52.753615 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.753942 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.253691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.254010 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.753953 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.754027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.754345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:53.754402 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.253938 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:54.753755 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.753827 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.754137 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.253941 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.254028 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.254370 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.754085 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.754158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:55.754529 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:56.253088 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.253170 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.253491 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.753629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.253537 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.753221 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:58.253261 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.253670 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:58.253729 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:58.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.754036 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.253988 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.254385 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.753169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:00.255875 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.256036 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.256356 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:00.256590 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:00.753314 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.753406 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.753729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.253451 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.253526 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.253836 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.753275 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.753592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.253224 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.753389 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:02.753739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:03.253378 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.253737 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:03.753753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.753845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.754210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.253955 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.254035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.753974 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:04.754016 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:05.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.254027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:05.753103 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.753190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.753552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.253106 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.253183 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.253481 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.753270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.753579 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:07.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.253288 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.253606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:07.253665 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:07.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.753237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.753615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.253519 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.253592 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.253905 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.754029 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.754407 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:09.253610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.253927 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:09.253968 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:09.753621 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.253736 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.253811 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.254126 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.753682 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.753989 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:11.253783 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:11.254252 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:11.754017 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.754095 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.754418 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.253174 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.253431 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.753169 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.753584 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.253725 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.753680 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.753954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:13.753997 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:14.253722 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.253802 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.254151 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:14.753815 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.753891 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.754223 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.254029 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.753806 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.753888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.754227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:15.754287 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:16.254074 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.254151 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.254498 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:16.753147 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.753225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.753479 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.253581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.753255 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:18.253273 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.253344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.253604 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:18.253646 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:18.753564 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.753634 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.253242 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.753337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:20.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:20.253718 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:20.753425 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.753514 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.753897 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.253583 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.753341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.753692 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.253625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.753263 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.753343 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:23.253236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.253636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:23.753622 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.754022 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.253690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.753690 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.753765 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:24.754119 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:25.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.253969 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.254295 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:25.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.253798 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.253879 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.254195 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.754019 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.754098 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.754443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:26.754501 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.253228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:27.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.753266 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.253426 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.253518 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.253857 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.753672 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.753767 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:29.254090 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.254181 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.254562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:29.254618 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:29.753292 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.753381 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.753726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.253402 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.253471 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.753408 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.753487 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.753850 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.753559 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:31.753600 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:32.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:32.755552 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.755633 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.755956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.253924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.753903 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.753982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.754307 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:33.754366 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:34.254124 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.254211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.254539 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:34.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.753398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:36.253412 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.253489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.253839 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:36.253891 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:36.753198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.753274 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.253727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.753428 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.753500 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.753749 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:38.253689 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.253766 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.254125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:38.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:38.753984 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.754059 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.754410 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.253122 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.253198 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.253459 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.753151 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.753259 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.753585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.253413 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.253767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.753533 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:40.753859 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:41.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.253596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:41.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.753268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.753605 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.253303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:43.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.253419 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:43.254022 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:43.753920 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.754014 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.754333 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.253118 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.253201 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.253526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:45.255002 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.255152 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.255478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:45.255533 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.753317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.253364 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.253796 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.753240 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.753574 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.753402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.753748 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:47.753808 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:48.253313 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:48.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.754069 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.253753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.253830 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.254168 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.753643 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.753731 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.754066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:49.754148 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:50.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.253994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:50.753099 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.753189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.253251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.253515 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:52.253411 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.253511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.253890 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:52.253964 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:52.753645 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.753719 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.253775 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.254202 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.754104 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.754180 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.754506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.253165 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.253239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.253494 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:54.753683 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:55.253358 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.253438 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.253774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:55.753173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.253177 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.253263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.253600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.753404 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:56.753805 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:57.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.253238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.253497 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.253572 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.253908 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.753944 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:58.753983 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:59.253741 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.253823 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.254166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:59.753959 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.754035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.253101 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.253195 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.753249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.753333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:01.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.253476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.253809 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:01.253884 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:01.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.753357 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.253331 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.253412 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.253739 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.753476 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.753557 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.753921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:03.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:03.253961 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:03.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.253243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.753367 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.753380 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.753466 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.753795 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:05.753852 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:06.253248 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:06.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.253321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.753244 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.753502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:08.253274 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.253352 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.253726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:08.253781 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:08.753770 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.753843 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.754162 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.253596 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.253675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.253945 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.753821 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.753904 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.754197 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:10.254043 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.254442 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:10.254495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:10.753142 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.753213 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.753467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.753382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.753753 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.253233 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.753305 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:12.753685 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:13.253376 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.253460 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.253784 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:13.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.753691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.253819 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.253898 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.254259 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.754072 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.754149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.754478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:14.754538 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:15.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.253248 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.253513 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:15.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.253764 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.753262 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:17.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.253350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.253713 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:17.253779 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:17.753480 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.753569 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.253664 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.753923 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.754002 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.754397 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.253225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.753254 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:19.753636 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:20.253334 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:20.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:21.753680 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:22.253524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.254279 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:22.753625 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.753972 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.253809 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.253888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.254196 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.754021 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.754101 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.754439 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:23.754495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:24.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.253250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.253623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:24.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.753607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.253403 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.253757 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.753263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.753530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:26.253258 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.253351 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.253693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:26.253746 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:26.753414 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.753490 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.753826 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.253253 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.753244 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.753673 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:28.253404 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.253479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.253776 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:28.253819 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:28.753595 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.753935 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.253381 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.253465 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.253954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.753737 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.753815 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:30.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:30.253995 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:30.753581 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.753666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.753956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.253745 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.253824 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.254143 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.753606 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.754026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:32.253830 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.254262 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:32.254319 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:32.754091 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.754169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.754555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.253132 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.253222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.753524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.753608 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.753895 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.253698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.753696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.753951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:34.753991 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:35.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.253882 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.254227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:35.754036 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.754112 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.754409 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.253093 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.253164 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.253416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.753271 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:37.253294 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.253378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.253664 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:37.253713 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:37.753375 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.253304 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.253376 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.753592 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.754003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:39.253608 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.253678 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.253933 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:39.253982 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:39.753743 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.753818 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.754166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.253997 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.254396 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.753116 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.253645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.753348 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.753424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.753761 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:41.753817 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:42.265137 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.265218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.265549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:42.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.753653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.253379 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.253788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.753627 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.753708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:44.253832 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.254217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:44.754035 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.754111 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.754446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.253168 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.753612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:46.253316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.253442 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:46.253773 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:46.753434 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.753511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.753766 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.253200 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.253277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.253570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.753267 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.753344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.753625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:48.253552 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.253626 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.253879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:48.253930 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:48.753836 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.753911 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.754217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.254026 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.254106 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.753598 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.753686 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:50.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.254209 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:50.254259 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:50.754039 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.754125 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.253140 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.253209 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.253462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.753185 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.253618 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.753250 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.753598 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:52.753651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:53.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:53.753664 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.753741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.754081 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.253591 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.253669 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.254015 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.753866 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.753946 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.754274 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:54.754329 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:55.254057 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.254131 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.254446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:55.753121 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.753211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.753456 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.253253 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.253557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:57.253260 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:57.253672 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:57.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.753392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.253532 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.253606 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.253910 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:59.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.253799 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.254130 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:59.254190 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:59.753954 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.754031 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.754326 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.260359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.261314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.266189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.753848 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:01.253975 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.254047 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:01.254396 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:01.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.753698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.753979 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.253768 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.253842 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.254146 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.753881 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.754220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.253637 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.253710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.253982 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.753913 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.753997 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.754309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:03.754367 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:04.254115 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.254189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.254536 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:04.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.753161 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.753416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.253139 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.253218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.253585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.753162 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.753568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:06.253362 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.253441 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:06.253744 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.753378 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.753454 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.753700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:08.253293 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.253374 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.253718 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:08.253778 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:08.753537 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.753616 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.253654 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.253730 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.254027 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.753808 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.753883 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.754221 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:10.254047 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.254124 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.254490 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:10.254545 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:10.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.753567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.253638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.753650 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.253202 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.253270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.253527 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:12.753698 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:13.253182 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.253256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.253592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:13.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.753477 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.753394 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.753489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.753829 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:14.753892 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:15.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:15.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.753586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.253207 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.753176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.753503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:17.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:17.253694 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:17.753223 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.253586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.753702 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.753779 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.754110 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:19.253939 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.254018 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.254367 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:19.254421 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:19.754123 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.754196 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.754517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.253190 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.753740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.253432 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.253502 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.253792 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.753215 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.753636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:21.753701 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:22.253195 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.253276 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:22.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.253220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.253589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:24.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.253665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:24.253703 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:24.753359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.753447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.753788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.253481 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.253571 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.253917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.753617 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.753635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:26.753691 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:27.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:27.753345 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.753799 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.253586 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.253666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.253996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.753669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:28.753726 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:29.253176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:29.253235 1170766 node_ready.go:38] duration metric: took 6m0.000252571s for node "functional-389537" to be "Ready" ...
	I1217 00:55:29.256355 1170766 out.go:203] 
	W1217 00:55:29.259198 1170766 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:55:29.259223 1170766 out.go:285] * 
	* 
	W1217 00:55:29.261375 1170766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:55:29.264098 1170766 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-389537 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.606220337s for "functional-389537" cluster.
I1217 00:55:30.019956 1136597 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
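The loop captured in the stderr above is minikube's node-readiness wait (node_ready.go): roughly every 500ms it issues GET https://192.168.49.2:8441/api/v1/nodes/functional-389537, treats errors such as "connection refused" as transient and retries, and gives up once the 6m0s deadline passes, which is what produces the GUEST_START failure and exit status 80 reported by functional_test.go. As a minimal, illustrative sketch (not minikube's actual code; the helper name pollNodeReady, the 500ms interval, and the kubeconfig handling are assumptions for the example), such a poll loop with client-go could look like this:

	// Illustrative sketch of a node-Ready poll loop like the one in the log above.
	// NOT minikube's implementation; names and intervals are assumptions.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// pollNodeReady blocks until the named node reports the Ready condition as True,
	// or the timeout elapses.
	func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient apiserver errors (e.g. connection refused while the
					// control plane restarts) are logged and retried, not fatal.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := pollNodeReady(context.Background(), cs, "functional-389537", 6*time.Minute); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

Against this cluster every Get keeps failing with "connection refused" on 192.168.49.2:8441, so the condition never returns true and the wait ends with a timeout after six minutes, matching the "WaitNodeCondition: context deadline exceeded" message above.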
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
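The JSON above appears to be the raw docker container inspect output for the kic container backing the profile, emitted as part of the failure post-mortem. A minimal sketch of reproducing that capture by hand, assuming the profile name functional-389537 from this run:

    # dump the full container state for the profile's kic container (hypothetical manual re-run)
    docker container inspect functional-389537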
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (341.524659ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
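The Host-only probe above exits non-zero even though the host reports Running, which the harness tolerates before collecting logs. To see the full per-component breakdown (host, kubelet, apiserver, kubeconfig) rather than only the Host field, the same binary can be run without the --format filter; a sketch assuming the same profile:

    # full status output instead of only {{.Host}}
    out/minikube-linux-arm64 status -p functional-389537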
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 logs -n 25: (1.006300495s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-099267 image rm kicbase/echo-server:functional-099267 --alsologtostderr                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image save --daemon kicbase/echo-server:functional-099267 --alsologtostderr                                                     │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/test/nested/copy/1136597/hosts                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/1136597.pem                                                                                         │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/1136597.pem                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/11365972.pem                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/11365972.pem                                                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format short --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image          │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete         │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start          │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start          │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:49:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:49:23.461389 1170766 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:49:23.461547 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461559 1170766 out.go:374] Setting ErrFile to fd 2...
	I1217 00:49:23.461579 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461900 1170766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:49:23.462303 1170766 out.go:368] Setting JSON to false
	I1217 00:49:23.463185 1170766 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23514,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:49:23.463289 1170766 start.go:143] virtualization:  
	I1217 00:49:23.466912 1170766 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:49:23.469855 1170766 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:49:23.469995 1170766 notify.go:221] Checking for updates...
	I1217 00:49:23.475916 1170766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:49:23.478779 1170766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:23.481739 1170766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:49:23.484668 1170766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:49:23.487521 1170766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:49:23.490907 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:23.491070 1170766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:49:23.524450 1170766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:49:23.524610 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.580909 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.571176137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.581015 1170766 docker.go:319] overlay module found
	I1217 00:49:23.585845 1170766 out.go:179] * Using the docker driver based on existing profile
	I1217 00:49:23.588706 1170766 start.go:309] selected driver: docker
	I1217 00:49:23.588726 1170766 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.588842 1170766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:49:23.588945 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.644593 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.634960306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.645010 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:23.645070 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:23.645127 1170766 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.648351 1170766 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:49:23.651037 1170766 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:49:23.653878 1170766 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:49:23.656858 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:23.656904 1170766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:49:23.656917 1170766 cache.go:65] Caching tarball of preloaded images
	I1217 00:49:23.656980 1170766 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:49:23.657013 1170766 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:49:23.657024 1170766 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:49:23.657126 1170766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:49:23.675917 1170766 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:49:23.675939 1170766 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:49:23.675960 1170766 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:49:23.675991 1170766 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:49:23.676062 1170766 start.go:364] duration metric: took 47.228µs to acquireMachinesLock for "functional-389537"
	I1217 00:49:23.676087 1170766 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:49:23.676097 1170766 fix.go:54] fixHost starting: 
	I1217 00:49:23.676360 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:23.693660 1170766 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:49:23.693691 1170766 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:49:23.696944 1170766 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:49:23.696988 1170766 machine.go:94] provisionDockerMachine start ...
	I1217 00:49:23.697095 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.714561 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.714904 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.714921 1170766 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:49:23.856040 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:23.856064 1170766 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:49:23.856128 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.875306 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.875626 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.875637 1170766 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:49:24.024137 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:24.024222 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.043436 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.043770 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.043794 1170766 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:49:24.176920 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:49:24.176960 1170766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:49:24.176987 1170766 ubuntu.go:190] setting up certificates
	I1217 00:49:24.177005 1170766 provision.go:84] configureAuth start
	I1217 00:49:24.177076 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:24.194508 1170766 provision.go:143] copyHostCerts
	I1217 00:49:24.194553 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194603 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:49:24.194616 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194693 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:49:24.194827 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194850 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:49:24.194859 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194890 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:49:24.194946 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.194967 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:49:24.194975 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.195000 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:49:24.195062 1170766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:49:24.401567 1170766 provision.go:177] copyRemoteCerts
	I1217 00:49:24.401643 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:49:24.401688 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.419163 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:24.516584 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:49:24.516654 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:49:24.535526 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:49:24.535590 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:49:24.556116 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:49:24.556181 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:49:24.575533 1170766 provision.go:87] duration metric: took 398.504828ms to configureAuth
	I1217 00:49:24.575561 1170766 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:49:24.575753 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:24.575856 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.593152 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.593467 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.593486 1170766 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:49:24.914611 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:49:24.914655 1170766 machine.go:97] duration metric: took 1.217656857s to provisionDockerMachine
	I1217 00:49:24.914668 1170766 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:49:24.914681 1170766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:49:24.914755 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:49:24.914823 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.935845 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.036750 1170766 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:49:25.040402 1170766 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:49:25.040450 1170766 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:49:25.040457 1170766 command_runner.go:130] > VERSION_ID="12"
	I1217 00:49:25.040461 1170766 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:49:25.040466 1170766 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:49:25.040470 1170766 command_runner.go:130] > ID=debian
	I1217 00:49:25.040475 1170766 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:49:25.040479 1170766 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:49:25.040485 1170766 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:49:25.040531 1170766 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:49:25.040571 1170766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:49:25.040583 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:49:25.040642 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:49:25.040724 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:49:25.040736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 00:49:25.040812 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:49:25.040822 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> /etc/test/nested/copy/1136597/hosts
	I1217 00:49:25.040875 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:49:25.048565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:25.066116 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:49:25.083960 1170766 start.go:296] duration metric: took 169.276161ms for postStartSetup
	I1217 00:49:25.084042 1170766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:49:25.084089 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.101382 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.193085 1170766 command_runner.go:130] > 18%
	I1217 00:49:25.193644 1170766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:49:25.197890 1170766 command_runner.go:130] > 160G
	I1217 00:49:25.198395 1170766 fix.go:56] duration metric: took 1.522293417s for fixHost
	I1217 00:49:25.198422 1170766 start.go:83] releasing machines lock for "functional-389537", held for 1.522344181s
	I1217 00:49:25.198491 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:25.216362 1170766 ssh_runner.go:195] Run: cat /version.json
	I1217 00:49:25.216396 1170766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:49:25.216449 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.216473 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.237434 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.266075 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.438053 1170766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:49:25.438122 1170766 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:49:25.438253 1170766 ssh_runner.go:195] Run: systemctl --version
	I1217 00:49:25.444320 1170766 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:49:25.444367 1170766 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:49:25.444850 1170766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:49:25.480454 1170766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:49:25.484847 1170766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:49:25.484904 1170766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:49:25.484962 1170766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:49:25.493012 1170766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:49:25.493039 1170766 start.go:496] detecting cgroup driver to use...
	I1217 00:49:25.493090 1170766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:49:25.493156 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:49:25.508569 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:49:25.521635 1170766 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:49:25.521740 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:49:25.537766 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:49:25.551122 1170766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:49:25.669862 1170766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:49:25.789898 1170766 docker.go:234] disabling docker service ...
	I1217 00:49:25.789984 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:49:25.805401 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:49:25.818559 1170766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:49:25.946131 1170766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:49:26.093460 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:49:26.106879 1170766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:49:26.120278 1170766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1217 00:49:26.121659 1170766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:49:26.121720 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.130856 1170766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:49:26.130968 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.140092 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.149223 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.158222 1170766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:49:26.166662 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.176047 1170766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.184976 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.194179 1170766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:49:26.201960 1170766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:49:26.202030 1170766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:49:26.209746 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.327753 1170766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:49:26.499257 1170766 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:49:26.499380 1170766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:49:26.502956 1170766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 00:49:26.502992 1170766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:49:26.503000 1170766 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1217 00:49:26.503008 1170766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:26.503016 1170766 command_runner.go:130] > Access: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503022 1170766 command_runner.go:130] > Modify: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503035 1170766 command_runner.go:130] > Change: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503041 1170766 command_runner.go:130] >  Birth: -
	I1217 00:49:26.503359 1170766 start.go:564] Will wait 60s for crictl version
	I1217 00:49:26.503439 1170766 ssh_runner.go:195] Run: which crictl
	I1217 00:49:26.507311 1170766 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:49:26.507416 1170766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:49:26.531135 1170766 command_runner.go:130] > Version:  0.1.0
	I1217 00:49:26.531410 1170766 command_runner.go:130] > RuntimeName:  cri-o
	I1217 00:49:26.531606 1170766 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 00:49:26.531797 1170766 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:49:26.534036 1170766 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:49:26.534147 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.559497 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.559533 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.559539 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.559545 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.559550 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.559554 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.559558 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.559563 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.559567 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.559570 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.559574 1170766 command_runner.go:130] >      static
	I1217 00:49:26.559578 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.559582 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.559598 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.559608 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.559612 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.559615 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.559620 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.559632 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.559637 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.561572 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.587741 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.587775 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.587782 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.587787 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.587793 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.587846 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.587858 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.587864 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.587877 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.587887 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.587891 1170766 command_runner.go:130] >      static
	I1217 00:49:26.587894 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.587897 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.587919 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.587929 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.587935 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.587950 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.587961 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.587966 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.587971 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.594651 1170766 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:49:26.597589 1170766 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:49:26.614215 1170766 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:49:26.618047 1170766 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:49:26.618237 1170766 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:49:26.618355 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:26.618425 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.651766 1170766 command_runner.go:130] > {
	I1217 00:49:26.651794 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.651799 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651810 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.651814 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651830 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.651837 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651841 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651850 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.651859 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.651866 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651870 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.651874 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651881 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651884 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651887 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651894 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.651901 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651911 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.651914 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651918 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651926 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.651935 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.651948 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651953 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.651957 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651963 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651970 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651973 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651980 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.651986 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651991 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.651994 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651998 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652006 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.652014 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.652026 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652030 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.652034 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.652038 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652041 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652044 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652051 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.652057 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652062 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.652065 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652069 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652077 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.652087 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.652091 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652095 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.652106 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652118 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652122 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652131 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652135 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652156 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652165 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652183 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.652204 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652210 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.652215 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652219 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652227 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.652238 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.652242 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652246 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.652252 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652256 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652260 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652266 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652271 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652274 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652277 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652284 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.652289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652296 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.652302 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652305 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652313 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.652322 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.652329 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652333 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.652337 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652344 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652350 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652354 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652358 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652361 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652364 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652371 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.652379 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652407 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.652458 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652463 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652470 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.652478 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.652526 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652536 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.652557 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652564 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652567 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652570 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652577 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.652589 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652595 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.652598 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652605 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652615 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.652653 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.652661 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652666 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.652670 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652674 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652677 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652681 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652689 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652696 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652702 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652708 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.652712 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652717 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.652722 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652726 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652734 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.652741 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.652747 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652751 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.652755 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652761 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.652765 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652775 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652779 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.652782 1170766 command_runner.go:130] >     }
	I1217 00:49:26.652785 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.652790 1170766 command_runner.go:130] > }
	I1217 00:49:26.655303 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.655332 1170766 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:49:26.655388 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.678896 1170766 command_runner.go:130] > {
	I1217 00:49:26.678916 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.678921 1170766 command_runner.go:130] >     {
	I1217 00:49:26.678929 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.678933 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.678939 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.678942 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678946 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.678958 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.678968 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.678972 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678976 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.678980 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.678990 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679002 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679020 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679027 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.679030 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679036 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.679039 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679043 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679056 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.679065 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.679071 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679075 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.679079 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679091 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679098 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679101 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679107 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.679111 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679119 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.679122 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679127 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679135 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.679146 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.679149 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679153 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.679160 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.679164 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679169 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679172 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679179 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.679185 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679190 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.679194 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679199 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679215 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.679225 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.679228 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679233 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.679239 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679243 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679249 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679257 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679264 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679268 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679271 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679277 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.679289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679294 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.679297 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679301 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679309 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.679317 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.679328 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679333 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.679336 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679340 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679344 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679351 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679355 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679365 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679368 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679375 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.679378 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679387 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.679390 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679394 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679405 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.679419 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.679423 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679427 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.679438 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679442 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679445 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679449 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679455 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679459 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679462 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679471 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.679476 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679481 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.679486 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679491 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679501 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.679517 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.679521 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679525 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.679529 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679535 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679543 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679549 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679555 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.679560 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679568 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.679574 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679577 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679586 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.679605 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.679612 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679616 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.679619 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679626 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679629 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679633 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679637 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679640 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679643 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679649 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.679655 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679660 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.679672 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679676 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679683 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.679691 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.679698 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679703 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.679706 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679710 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.679713 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679717 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679721 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.679727 1170766 command_runner.go:130] >     }
	I1217 00:49:26.679730 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.679735 1170766 command_runner.go:130] > }
	I1217 00:49:26.682128 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.682152 1170766 cache_images.go:86] Images are preloaded, skipping loading
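
The two crictl dumps above are how the preload decision is reached: the image tags returned by `sudo crictl images --output json` are compared against the images required for the requested Kubernetes version, and because every required tag is already present, extraction and image loading are skipped. A minimal, hypothetical Go sketch of that kind of comparison (illustrative only, not minikube's actual code; the JSON fields mirror the listing above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields of `crictl images --output json` used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required tag appears in the crictl listing.
func allPreloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, fmt.Errorf("missing %s", want)
		}
	}
	return true, nil
}

func main() {
	// Two of the tags present in the listing above, used as an example input.
	ok, err := allPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
	})
	fmt.Println(ok, err)
}
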
	I1217 00:49:26.682160 1170766 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:49:26.682270 1170766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
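
The kubeadm.go:947 message above is the kubelet systemd drop-in written for this node: ExecStart is cleared and re-declared with node-specific flags (hostname override, node IP, kubeconfig paths) for the v1.35.0-beta.0 kubelet binary. A simplified, hypothetical Go sketch of rendering such a drop-in from a few parameters (the template text mirrors the unit shown above; it is not minikube's implementation):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template matching the drop-in printed in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Binary}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	Binary   string
	Hostname string
	NodeIP   string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the node shown in this log.
	_ = t.Execute(os.Stdout, node{
		Binary:   "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		Hostname: "functional-389537",
		NodeIP:   "192.168.49.2",
	})
}
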
	I1217 00:49:26.682351 1170766 ssh_runner.go:195] Run: crio config
	I1217 00:49:26.731730 1170766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 00:49:26.731754 1170766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 00:49:26.731761 1170766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 00:49:26.731764 1170766 command_runner.go:130] > #
	I1217 00:49:26.731771 1170766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 00:49:26.731778 1170766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 00:49:26.731784 1170766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 00:49:26.731801 1170766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 00:49:26.731808 1170766 command_runner.go:130] > # reload'.
	I1217 00:49:26.731815 1170766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 00:49:26.731836 1170766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 00:49:26.731843 1170766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 00:49:26.731849 1170766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 00:49:26.731853 1170766 command_runner.go:130] > [crio]
	I1217 00:49:26.731859 1170766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 00:49:26.731866 1170766 command_runner.go:130] > # containers images, in this directory.
	I1217 00:49:26.732568 1170766 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 00:49:26.732592 1170766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 00:49:26.733157 1170766 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 00:49:26.733176 1170766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 00:49:26.733597 1170766 command_runner.go:130] > # imagestore = ""
	I1217 00:49:26.733614 1170766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 00:49:26.733623 1170766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 00:49:26.734179 1170766 command_runner.go:130] > # storage_driver = "overlay"
	I1217 00:49:26.734196 1170766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 00:49:26.734204 1170766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 00:49:26.734478 1170766 command_runner.go:130] > # storage_option = [
	I1217 00:49:26.734782 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.734798 1170766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 00:49:26.734807 1170766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 00:49:26.735378 1170766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 00:49:26.735394 1170766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 00:49:26.735411 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 00:49:26.735422 1170766 command_runner.go:130] > # always happen on a node reboot
	I1217 00:49:26.735984 1170766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 00:49:26.736023 1170766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 00:49:26.736036 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 00:49:26.736041 1170766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 00:49:26.736536 1170766 command_runner.go:130] > # version_file_persist = ""
	I1217 00:49:26.736561 1170766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 00:49:26.736570 1170766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 00:49:26.737150 1170766 command_runner.go:130] > # internal_wipe = true
	I1217 00:49:26.737173 1170766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 00:49:26.737180 1170766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 00:49:26.737739 1170766 command_runner.go:130] > # internal_repair = true
	I1217 00:49:26.737758 1170766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 00:49:26.737766 1170766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 00:49:26.737772 1170766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 00:49:26.738332 1170766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 00:49:26.738352 1170766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 00:49:26.738356 1170766 command_runner.go:130] > [crio.api]
	I1217 00:49:26.738361 1170766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 00:49:26.738921 1170766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 00:49:26.738940 1170766 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 00:49:26.739496 1170766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 00:49:26.739517 1170766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 00:49:26.739523 1170766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 00:49:26.740074 1170766 command_runner.go:130] > # stream_port = "0"
	I1217 00:49:26.740093 1170766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 00:49:26.740679 1170766 command_runner.go:130] > # stream_enable_tls = false
	I1217 00:49:26.740700 1170766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 00:49:26.741116 1170766 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 00:49:26.741133 1170766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 00:49:26.741147 1170766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 00:49:26.741613 1170766 command_runner.go:130] > # stream_tls_cert = ""
	I1217 00:49:26.741629 1170766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 00:49:26.741636 1170766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 00:49:26.742076 1170766 command_runner.go:130] > # stream_tls_key = ""
	I1217 00:49:26.742092 1170766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 00:49:26.742107 1170766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 00:49:26.742117 1170766 command_runner.go:130] > # automatically pick up the changes.
	I1217 00:49:26.742632 1170766 command_runner.go:130] > # stream_tls_ca = ""
	I1217 00:49:26.742675 1170766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743308 1170766 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 00:49:26.743331 1170766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743950 1170766 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 00:49:26.743971 1170766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 00:49:26.743978 1170766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 00:49:26.743981 1170766 command_runner.go:130] > [crio.runtime]
	I1217 00:49:26.743988 1170766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 00:49:26.743996 1170766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 00:49:26.744007 1170766 command_runner.go:130] > # "nofile=1024:2048"
	I1217 00:49:26.744021 1170766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 00:49:26.744329 1170766 command_runner.go:130] > # default_ulimits = [
	I1217 00:49:26.744680 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.744702 1170766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 00:49:26.745338 1170766 command_runner.go:130] > # no_pivot = false
	I1217 00:49:26.745359 1170766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 00:49:26.745367 1170766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 00:49:26.745979 1170766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 00:49:26.746000 1170766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 00:49:26.746006 1170766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 00:49:26.746013 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.746484 1170766 command_runner.go:130] > # conmon = ""
	I1217 00:49:26.746503 1170766 command_runner.go:130] > # Cgroup setting for conmon
	I1217 00:49:26.746512 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 00:49:26.746837 1170766 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 00:49:26.746859 1170766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 00:49:26.746866 1170766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 00:49:26.746875 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.747181 1170766 command_runner.go:130] > # conmon_env = [
	I1217 00:49:26.747508 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.747529 1170766 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 00:49:26.747536 1170766 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 00:49:26.747545 1170766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 00:49:26.747848 1170766 command_runner.go:130] > # default_env = [
	I1217 00:49:26.748185 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.748200 1170766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 00:49:26.748210 1170766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 00:49:26.750925 1170766 command_runner.go:130] > # selinux = false
	I1217 00:49:26.750948 1170766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 00:49:26.750958 1170766 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 00:49:26.750964 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.751661 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.751677 1170766 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 00:49:26.751683 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752150 1170766 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 00:49:26.752167 1170766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 00:49:26.752181 1170766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 00:49:26.752191 1170766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 00:49:26.752216 1170766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 00:49:26.752224 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752873 1170766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 00:49:26.752894 1170766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 00:49:26.752932 1170766 command_runner.go:130] > # the cgroup blockio controller.
	I1217 00:49:26.753417 1170766 command_runner.go:130] > # blockio_config_file = ""
	I1217 00:49:26.753438 1170766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 00:49:26.753444 1170766 command_runner.go:130] > # blockio parameters.
	I1217 00:49:26.754055 1170766 command_runner.go:130] > # blockio_reload = false
	I1217 00:49:26.754079 1170766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 00:49:26.754084 1170766 command_runner.go:130] > # irqbalance daemon.
	I1217 00:49:26.754673 1170766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 00:49:26.754692 1170766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 00:49:26.754700 1170766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 00:49:26.754708 1170766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 00:49:26.755498 1170766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 00:49:26.755515 1170766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 00:49:26.755521 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.756018 1170766 command_runner.go:130] > # rdt_config_file = ""
	I1217 00:49:26.756034 1170766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 00:49:26.756360 1170766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 00:49:26.756381 1170766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 00:49:26.756895 1170766 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 00:49:26.756917 1170766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 00:49:26.756925 1170766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 00:49:26.756935 1170766 command_runner.go:130] > # will be added.
	I1217 00:49:26.757272 1170766 command_runner.go:130] > # default_capabilities = [
	I1217 00:49:26.757675 1170766 command_runner.go:130] > # 	"CHOWN",
	I1217 00:49:26.758010 1170766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 00:49:26.758348 1170766 command_runner.go:130] > # 	"FSETID",
	I1217 00:49:26.758682 1170766 command_runner.go:130] > # 	"FOWNER",
	I1217 00:49:26.759200 1170766 command_runner.go:130] > # 	"SETGID",
	I1217 00:49:26.759214 1170766 command_runner.go:130] > # 	"SETUID",
	I1217 00:49:26.759238 1170766 command_runner.go:130] > # 	"SETPCAP",
	I1217 00:49:26.759246 1170766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 00:49:26.759249 1170766 command_runner.go:130] > # 	"KILL",
	I1217 00:49:26.759253 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759261 1170766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 00:49:26.759273 1170766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 00:49:26.759278 1170766 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 00:49:26.759290 1170766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 00:49:26.759297 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759305 1170766 command_runner.go:130] > default_sysctls = [
	I1217 00:49:26.759310 1170766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 00:49:26.759312 1170766 command_runner.go:130] > ]
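
Almost every line of this `crio config` output is a commented-out default; the effective overrides on this node are the uncommented keys (conmon_cgroup, cgroup_manager, and the default_sysctls block just above). A small, hypothetical Go filter that prints only those active lines can make a dump like this easier to scan (assumes `crio` is on PATH and runnable; not part of the test tooling):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// main runs `crio config` and prints only uncommented, non-empty lines,
// i.e. the section headers and the settings that actually override defaults.
func main() {
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		fmt.Println("crio config failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		fmt.Println(line)
	}
}
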
	I1217 00:49:26.759317 1170766 command_runner.go:130] > # List of devices on the host that a
	I1217 00:49:26.759323 1170766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 00:49:26.759327 1170766 command_runner.go:130] > # allowed_devices = [
	I1217 00:49:26.759331 1170766 command_runner.go:130] > # 	"/dev/fuse",
	I1217 00:49:26.759338 1170766 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 00:49:26.759341 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759347 1170766 command_runner.go:130] > # List of additional devices. specified as
	I1217 00:49:26.759358 1170766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 00:49:26.759363 1170766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 00:49:26.759373 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759377 1170766 command_runner.go:130] > # additional_devices = [
	I1217 00:49:26.759380 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759386 1170766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 00:49:26.759396 1170766 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 00:49:26.759406 1170766 command_runner.go:130] > # 	"/etc/cdi",
	I1217 00:49:26.759411 1170766 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 00:49:26.759414 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759421 1170766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 00:49:26.759446 1170766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 00:49:26.759454 1170766 command_runner.go:130] > # Defaults to false.
	I1217 00:49:26.759459 1170766 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 00:49:26.759466 1170766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 00:49:26.759476 1170766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 00:49:26.759480 1170766 command_runner.go:130] > # hooks_dir = [
	I1217 00:49:26.759486 1170766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 00:49:26.759490 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759496 1170766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 00:49:26.759505 1170766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 00:49:26.759511 1170766 command_runner.go:130] > # its default mounts from the following two files:
	I1217 00:49:26.759515 1170766 command_runner.go:130] > #
	I1217 00:49:26.759522 1170766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 00:49:26.759532 1170766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 00:49:26.759537 1170766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 00:49:26.759540 1170766 command_runner.go:130] > #
	I1217 00:49:26.759546 1170766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 00:49:26.759556 1170766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 00:49:26.759563 1170766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 00:49:26.759569 1170766 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 00:49:26.759578 1170766 command_runner.go:130] > #
	I1217 00:49:26.759582 1170766 command_runner.go:130] > # default_mounts_file = ""
	I1217 00:49:26.759588 1170766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 00:49:26.759595 1170766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 00:49:26.759599 1170766 command_runner.go:130] > # pids_limit = -1
	I1217 00:49:26.759609 1170766 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1217 00:49:26.759619 1170766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 00:49:26.759625 1170766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 00:49:26.759634 1170766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 00:49:26.759644 1170766 command_runner.go:130] > # log_size_max = -1
	I1217 00:49:26.759653 1170766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 00:49:26.759660 1170766 command_runner.go:130] > # log_to_journald = false
	I1217 00:49:26.759666 1170766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 00:49:26.759671 1170766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 00:49:26.759676 1170766 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 00:49:26.759681 1170766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 00:49:26.759686 1170766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 00:49:26.759694 1170766 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 00:49:26.759700 1170766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 00:49:26.759704 1170766 command_runner.go:130] > # read_only = false
	I1217 00:49:26.759714 1170766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 00:49:26.759721 1170766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 00:49:26.759725 1170766 command_runner.go:130] > # live configuration reload.
	I1217 00:49:26.759734 1170766 command_runner.go:130] > # log_level = "info"
	I1217 00:49:26.759741 1170766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 00:49:26.759762 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.759770 1170766 command_runner.go:130] > # log_filter = ""
	I1217 00:49:26.759776 1170766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759782 1170766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 00:49:26.759790 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759801 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.759809 1170766 command_runner.go:130] > # uid_mappings = ""
	I1217 00:49:26.759815 1170766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759821 1170766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 00:49:26.759825 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759833 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761229 1170766 command_runner.go:130] > # gid_mappings = ""
	I1217 00:49:26.761253 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 00:49:26.761260 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761266 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761274 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761925 1170766 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 00:49:26.761952 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 00:49:26.761960 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761966 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761974 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.762609 1170766 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 00:49:26.762630 1170766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 00:49:26.762637 1170766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 00:49:26.762643 1170766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 00:49:26.763842 1170766 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 00:49:26.763856 1170766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 00:49:26.763864 1170766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 00:49:26.763869 1170766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 00:49:26.763873 1170766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 00:49:26.763878 1170766 command_runner.go:130] > # drop_infra_ctr = true
	I1217 00:49:26.763885 1170766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 00:49:26.763900 1170766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 00:49:26.763909 1170766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 00:49:26.763919 1170766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 00:49:26.763926 1170766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 00:49:26.763932 1170766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 00:49:26.763938 1170766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 00:49:26.763943 1170766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 00:49:26.763947 1170766 command_runner.go:130] > # shared_cpuset = ""
	I1217 00:49:26.763953 1170766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 00:49:26.763958 1170766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 00:49:26.763963 1170766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 00:49:26.763976 1170766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 00:49:26.763980 1170766 command_runner.go:130] > # pinns_path = ""
	I1217 00:49:26.763986 1170766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 00:49:26.764001 1170766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 00:49:26.764011 1170766 command_runner.go:130] > # enable_criu_support = true
	I1217 00:49:26.764017 1170766 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 00:49:26.764022 1170766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 00:49:26.764027 1170766 command_runner.go:130] > # enable_pod_events = false
	I1217 00:49:26.764033 1170766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 00:49:26.764043 1170766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 00:49:26.764047 1170766 command_runner.go:130] > # default_runtime = "crun"
	I1217 00:49:26.764053 1170766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 00:49:26.764064 1170766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1217 00:49:26.764077 1170766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 00:49:26.764086 1170766 command_runner.go:130] > # creation as a file is not desired either.
	I1217 00:49:26.764094 1170766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 00:49:26.764101 1170766 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 00:49:26.764105 1170766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 00:49:26.764108 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.764115 1170766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 00:49:26.764124 1170766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 00:49:26.764131 1170766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 00:49:26.764141 1170766 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 00:49:26.764144 1170766 command_runner.go:130] > #
	I1217 00:49:26.764149 1170766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 00:49:26.764154 1170766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 00:49:26.764162 1170766 command_runner.go:130] > # runtime_type = "oci"
	I1217 00:49:26.764167 1170766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 00:49:26.764172 1170766 command_runner.go:130] > # inherit_default_runtime = false
	I1217 00:49:26.764194 1170766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 00:49:26.764203 1170766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 00:49:26.764208 1170766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 00:49:26.764212 1170766 command_runner.go:130] > # monitor_env = []
	I1217 00:49:26.764217 1170766 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 00:49:26.764225 1170766 command_runner.go:130] > # allowed_annotations = []
	I1217 00:49:26.764231 1170766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 00:49:26.764239 1170766 command_runner.go:130] > # no_sync_log = false
	I1217 00:49:26.764246 1170766 command_runner.go:130] > # default_annotations = {}
	I1217 00:49:26.764250 1170766 command_runner.go:130] > # stream_websockets = false
	I1217 00:49:26.764254 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.764304 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.764313 1170766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 00:49:26.764320 1170766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 00:49:26.764331 1170766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 00:49:26.764338 1170766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 00:49:26.764341 1170766 command_runner.go:130] > #   in $PATH.
	I1217 00:49:26.764347 1170766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 00:49:26.764352 1170766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 00:49:26.764359 1170766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 00:49:26.764366 1170766 command_runner.go:130] > #   state.
	I1217 00:49:26.764376 1170766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 00:49:26.764387 1170766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 00:49:26.764393 1170766 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 00:49:26.764400 1170766 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 00:49:26.764409 1170766 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 00:49:26.764454 1170766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 00:49:26.764462 1170766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 00:49:26.764468 1170766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 00:49:26.764475 1170766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 00:49:26.764480 1170766 command_runner.go:130] > #   The currently recognized values are:
	I1217 00:49:26.764486 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 00:49:26.764494 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 00:49:26.764504 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 00:49:26.764515 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 00:49:26.764524 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 00:49:26.764532 1170766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 00:49:26.764539 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 00:49:26.764554 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 00:49:26.764565 1170766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 00:49:26.764575 1170766 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 00:49:26.764586 1170766 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 00:49:26.764592 1170766 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 00:49:26.764599 1170766 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 00:49:26.764605 1170766 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 00:49:26.764611 1170766 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 00:49:26.764620 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 00:49:26.764629 1170766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 00:49:26.764634 1170766 command_runner.go:130] > #   deprecated option "conmon".
	I1217 00:49:26.764642 1170766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 00:49:26.764650 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 00:49:26.764658 1170766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 00:49:26.764668 1170766 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 00:49:26.764675 1170766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 00:49:26.764680 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 00:49:26.764688 1170766 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 00:49:26.764692 1170766 command_runner.go:130] > #   conmon-rs by using:
	I1217 00:49:26.764705 1170766 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 00:49:26.764713 1170766 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 00:49:26.764724 1170766 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 00:49:26.764731 1170766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 00:49:26.764740 1170766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 00:49:26.764747 1170766 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 00:49:26.764755 1170766 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 00:49:26.764760 1170766 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 00:49:26.764769 1170766 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 00:49:26.764778 1170766 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 00:49:26.764783 1170766 command_runner.go:130] > #   when a machine crash happens.
	I1217 00:49:26.764794 1170766 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 00:49:26.764803 1170766 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 00:49:26.764814 1170766 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 00:49:26.764819 1170766 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 00:49:26.764831 1170766 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 00:49:26.764843 1170766 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 00:49:26.764845 1170766 command_runner.go:130] > #
	I1217 00:49:26.764850 1170766 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 00:49:26.764853 1170766 command_runner.go:130] > #
	I1217 00:49:26.764859 1170766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 00:49:26.764870 1170766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 00:49:26.764873 1170766 command_runner.go:130] > #
	I1217 00:49:26.764881 1170766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 00:49:26.764890 1170766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 00:49:26.764894 1170766 command_runner.go:130] > #
	I1217 00:49:26.764900 1170766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 00:49:26.764907 1170766 command_runner.go:130] > # feature.
	I1217 00:49:26.764910 1170766 command_runner.go:130] > #
	I1217 00:49:26.764916 1170766 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 00:49:26.764922 1170766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 00:49:26.764928 1170766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 00:49:26.764934 1170766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 00:49:26.764944 1170766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 00:49:26.764947 1170766 command_runner.go:130] > #
	I1217 00:49:26.764953 1170766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 00:49:26.764963 1170766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 00:49:26.764966 1170766 command_runner.go:130] > #
	I1217 00:49:26.764972 1170766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 00:49:26.764981 1170766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 00:49:26.764984 1170766 command_runner.go:130] > #
	I1217 00:49:26.764991 1170766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 00:49:26.764997 1170766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 00:49:26.765000 1170766 command_runner.go:130] > # limitation.
	I1217 00:49:26.765005 1170766 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 00:49:26.765010 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 00:49:26.765015 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765019 1170766 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 00:49:26.765028 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765047 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765056 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765061 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765065 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765069 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765073 1170766 command_runner.go:130] > allowed_annotations = [
	I1217 00:49:26.765077 1170766 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 00:49:26.765080 1170766 command_runner.go:130] > ]
	I1217 00:49:26.765084 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765089 1170766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 00:49:26.765093 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 00:49:26.765096 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765101 1170766 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 00:49:26.765110 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765114 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765119 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765124 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765132 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765136 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765141 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765148 1170766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 00:49:26.765158 1170766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 00:49:26.765165 1170766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 00:49:26.765173 1170766 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1217 00:49:26.765184 1170766 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 00:49:26.765195 1170766 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 00:49:26.765205 1170766 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 00:49:26.765212 1170766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 00:49:26.765226 1170766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 00:49:26.765235 1170766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 00:49:26.765244 1170766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 00:49:26.765251 1170766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 00:49:26.765254 1170766 command_runner.go:130] > # Example:
	I1217 00:49:26.765266 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 00:49:26.765271 1170766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 00:49:26.765283 1170766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 00:49:26.765288 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 00:49:26.765297 1170766 command_runner.go:130] > # cpuset = "0-1"
	I1217 00:49:26.765301 1170766 command_runner.go:130] > # cpushares = "5"
	I1217 00:49:26.765305 1170766 command_runner.go:130] > # cpuquota = "1000"
	I1217 00:49:26.765309 1170766 command_runner.go:130] > # cpuperiod = "100000"
	I1217 00:49:26.765312 1170766 command_runner.go:130] > # cpulimit = "35"
	I1217 00:49:26.765317 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.765321 1170766 command_runner.go:130] > # The workload name is workload-type.
	I1217 00:49:26.765337 1170766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 00:49:26.765342 1170766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 00:49:26.765348 1170766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 00:49:26.765357 1170766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 00:49:26.765362 1170766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
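As a worked illustration of the example values above (assuming the millicore value is converted as cpuquota = cpulimit / 1000 * cpuperiod): a "cpulimit" of 35 millicores with a "cpuperiod" of 100000 microseconds yields a "cpuquota" of 3500 microseconds, overriding the configured "cpuquota" of 1000.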
	I1217 00:49:26.765372 1170766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 00:49:26.765378 1170766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 00:49:26.765388 1170766 command_runner.go:130] > # Default value is set to true
	I1217 00:49:26.765392 1170766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 00:49:26.765399 1170766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 00:49:26.765404 1170766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 00:49:26.765413 1170766 command_runner.go:130] > # Default value is set to 'false'
	I1217 00:49:26.765417 1170766 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 00:49:26.765422 1170766 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1217 00:49:26.765431 1170766 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 00:49:26.765434 1170766 command_runner.go:130] > # timezone = ""
	I1217 00:49:26.765440 1170766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 00:49:26.765444 1170766 command_runner.go:130] > #
	I1217 00:49:26.765450 1170766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 00:49:26.765460 1170766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 00:49:26.765464 1170766 command_runner.go:130] > [crio.image]
	I1217 00:49:26.765470 1170766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 00:49:26.765481 1170766 command_runner.go:130] > # default_transport = "docker://"
	I1217 00:49:26.765487 1170766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 00:49:26.765498 1170766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765502 1170766 command_runner.go:130] > # global_auth_file = ""
	I1217 00:49:26.765506 1170766 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 00:49:26.765512 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765517 1170766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.765523 1170766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 00:49:26.765536 1170766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765541 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765550 1170766 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 00:49:26.765556 1170766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 00:49:26.765562 1170766 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1217 00:49:26.765574 1170766 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1217 00:49:26.765580 1170766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 00:49:26.765583 1170766 command_runner.go:130] > # pause_command = "/pause"
	I1217 00:49:26.765589 1170766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 00:49:26.765595 1170766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 00:49:26.765606 1170766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 00:49:26.765612 1170766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 00:49:26.765624 1170766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 00:49:26.765630 1170766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 00:49:26.765638 1170766 command_runner.go:130] > # pinned_images = [
	I1217 00:49:26.765641 1170766 command_runner.go:130] > # ]
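The exact/glob/keyword matching rules for pinned_images described above can be sketched as follows; this is a minimal Go illustration of the stated rules, not CRI-O's actual matcher:

	package main

	import (
		"fmt"
		"strings"
	)

	// matchPinned illustrates the three pinning modes described above:
	// exact (whole name), glob (trailing *), and keyword (* on both ends).
	func matchPinned(pattern, name string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			// keyword match: wildcards on both ends
			return strings.Contains(name, strings.Trim(pattern, "*"))
		case strings.HasSuffix(pattern, "*"):
			// glob match: wildcard at the end
			return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
		default:
			// exact match: must match the entire name
			return pattern == name
		}
	}

	func main() {
		fmt.Println(matchPinned("registry.k8s.io/pause*", "registry.k8s.io/pause:3.10.1")) // true (glob)
		fmt.Println(matchPinned("*pause*", "registry.k8s.io/pause:3.10.1"))                // true (keyword)
		fmt.Println(matchPinned("registry.k8s.io/pause", "registry.k8s.io/pause:3.10.1"))  // false (exact)
	}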
	I1217 00:49:26.765647 1170766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 00:49:26.765654 1170766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 00:49:26.765667 1170766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 00:49:26.765673 1170766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 00:49:26.765682 1170766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 00:49:26.765687 1170766 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 00:49:26.765692 1170766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 00:49:26.765703 1170766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 00:49:26.765709 1170766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 00:49:26.765722 1170766 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1217 00:49:26.765729 1170766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 00:49:26.765738 1170766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 00:49:26.765749 1170766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 00:49:26.765755 1170766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 00:49:26.765762 1170766 command_runner.go:130] > # changing them here.
	I1217 00:49:26.765771 1170766 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 00:49:26.765775 1170766 command_runner.go:130] > # insecure_registries = [
	I1217 00:49:26.765778 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765785 1170766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 00:49:26.765793 1170766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 00:49:26.765799 1170766 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 00:49:26.765805 1170766 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 00:49:26.765813 1170766 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 00:49:26.765819 1170766 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 00:49:26.765831 1170766 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 00:49:26.765835 1170766 command_runner.go:130] > # auto_reload_registries = false
	I1217 00:49:26.765842 1170766 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 00:49:26.765854 1170766 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1217 00:49:26.765860 1170766 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 00:49:26.765868 1170766 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 00:49:26.765872 1170766 command_runner.go:130] > # The mode of short name resolution.
	I1217 00:49:26.765879 1170766 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 00:49:26.765891 1170766 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1217 00:49:26.765899 1170766 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 00:49:26.765908 1170766 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 00:49:26.765914 1170766 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 00:49:26.765920 1170766 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 00:49:26.765924 1170766 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 00:49:26.765930 1170766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 00:49:26.765933 1170766 command_runner.go:130] > # CNI plugins.
	I1217 00:49:26.765942 1170766 command_runner.go:130] > [crio.network]
	I1217 00:49:26.765948 1170766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 00:49:26.765958 1170766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 00:49:26.765965 1170766 command_runner.go:130] > # cni_default_network = ""
	I1217 00:49:26.765972 1170766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 00:49:26.765976 1170766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 00:49:26.765982 1170766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 00:49:26.765989 1170766 command_runner.go:130] > # plugin_dirs = [
	I1217 00:49:26.765992 1170766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 00:49:26.765995 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765999 1170766 command_runner.go:130] > # List of included pod metrics.
	I1217 00:49:26.766003 1170766 command_runner.go:130] > # included_pod_metrics = [
	I1217 00:49:26.766006 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766012 1170766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 00:49:26.766015 1170766 command_runner.go:130] > [crio.metrics]
	I1217 00:49:26.766020 1170766 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 00:49:26.766031 1170766 command_runner.go:130] > # enable_metrics = false
	I1217 00:49:26.766037 1170766 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 00:49:26.766046 1170766 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 00:49:26.766053 1170766 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1217 00:49:26.766061 1170766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 00:49:26.766070 1170766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 00:49:26.766074 1170766 command_runner.go:130] > # metrics_collectors = [
	I1217 00:49:26.766078 1170766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 00:49:26.766083 1170766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 00:49:26.766087 1170766 command_runner.go:130] > # 	"containers_oom_total",
	I1217 00:49:26.766090 1170766 command_runner.go:130] > # 	"processes_defunct",
	I1217 00:49:26.766094 1170766 command_runner.go:130] > # 	"operations_total",
	I1217 00:49:26.766099 1170766 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 00:49:26.766103 1170766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 00:49:26.766107 1170766 command_runner.go:130] > # 	"operations_errors_total",
	I1217 00:49:26.766111 1170766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 00:49:26.766116 1170766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 00:49:26.766120 1170766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 00:49:26.766123 1170766 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 00:49:26.766131 1170766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 00:49:26.766140 1170766 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 00:49:26.766144 1170766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 00:49:26.766149 1170766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 00:49:26.766160 1170766 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 00:49:26.766163 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766169 1170766 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 00:49:26.766173 1170766 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 00:49:26.766178 1170766 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 00:49:26.766182 1170766 command_runner.go:130] > # metrics_port = 9090
	I1217 00:49:26.766187 1170766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 00:49:26.766195 1170766 command_runner.go:130] > # metrics_socket = ""
	I1217 00:49:26.766200 1170766 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 00:49:26.766206 1170766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 00:49:26.766216 1170766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 00:49:26.766221 1170766 command_runner.go:130] > # certificate on any modification event.
	I1217 00:49:26.766224 1170766 command_runner.go:130] > # metrics_cert = ""
	I1217 00:49:26.766230 1170766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 00:49:26.766239 1170766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 00:49:26.766243 1170766 command_runner.go:130] > # metrics_key = ""
	I1217 00:49:26.766249 1170766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 00:49:26.766252 1170766 command_runner.go:130] > [crio.tracing]
	I1217 00:49:26.766257 1170766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 00:49:26.766261 1170766 command_runner.go:130] > # enable_tracing = false
	I1217 00:49:26.766266 1170766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 00:49:26.766270 1170766 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 00:49:26.766277 1170766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 00:49:26.766287 1170766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 00:49:26.766292 1170766 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 00:49:26.766295 1170766 command_runner.go:130] > [crio.nri]
	I1217 00:49:26.766300 1170766 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 00:49:26.766308 1170766 command_runner.go:130] > # enable_nri = true
	I1217 00:49:26.766312 1170766 command_runner.go:130] > # NRI socket to listen on.
	I1217 00:49:26.766320 1170766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 00:49:26.766324 1170766 command_runner.go:130] > # NRI plugin directory to use.
	I1217 00:49:26.766328 1170766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 00:49:26.766333 1170766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 00:49:26.766338 1170766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 00:49:26.766343 1170766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 00:49:26.766396 1170766 command_runner.go:130] > # nri_disable_connections = false
	I1217 00:49:26.766406 1170766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 00:49:26.766411 1170766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 00:49:26.766416 1170766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 00:49:26.766420 1170766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 00:49:26.766425 1170766 command_runner.go:130] > # NRI default validator configuration.
	I1217 00:49:26.766431 1170766 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 00:49:26.766438 1170766 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 00:49:26.766447 1170766 command_runner.go:130] > # can be restricted/rejected:
	I1217 00:49:26.766451 1170766 command_runner.go:130] > # - OCI hook injection
	I1217 00:49:26.766456 1170766 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 00:49:26.766466 1170766 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 00:49:26.766471 1170766 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 00:49:26.766475 1170766 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 00:49:26.766486 1170766 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 00:49:26.766493 1170766 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 00:49:26.766498 1170766 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 00:49:26.766501 1170766 command_runner.go:130] > #
	I1217 00:49:26.766505 1170766 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 00:49:26.766509 1170766 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 00:49:26.766519 1170766 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 00:49:26.766525 1170766 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 00:49:26.766531 1170766 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 00:49:26.766540 1170766 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 00:49:26.766545 1170766 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 00:49:26.766550 1170766 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 00:49:26.766558 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766567 1170766 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 00:49:26.766574 1170766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 00:49:26.766579 1170766 command_runner.go:130] > [crio.stats]
	I1217 00:49:26.766584 1170766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 00:49:26.766590 1170766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 00:49:26.766597 1170766 command_runner.go:130] > # stats_collection_period = 0
	I1217 00:49:26.766603 1170766 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 00:49:26.766610 1170766 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 00:49:26.766618 1170766 command_runner.go:130] > # collection_period = 0
	I1217 00:49:26.769313 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.709999291Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 00:49:26.769335 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710041801Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 00:49:26.769350 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.7100717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 00:49:26.769358 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710096963Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 00:49:26.769367 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710182557Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.769376 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710452795Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 00:49:26.769388 1170766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1217 00:49:26.769780 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:26.769799 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:26.769817 1170766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:49:26.769847 1170766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:49:26.769980 1170766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:49:26.770057 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:49:26.777246 1170766 command_runner.go:130] > kubeadm
	I1217 00:49:26.777268 1170766 command_runner.go:130] > kubectl
	I1217 00:49:26.777274 1170766 command_runner.go:130] > kubelet
	I1217 00:49:26.778436 1170766 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:49:26.778500 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:49:26.786236 1170766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:49:26.799825 1170766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:49:26.813059 1170766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 00:49:26.828019 1170766 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:49:26.831670 1170766 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:49:26.831993 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.960014 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:27.502236 1170766 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:49:27.502256 1170766 certs.go:195] generating shared ca certs ...
	I1217 00:49:27.502272 1170766 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:27.502407 1170766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:49:27.502457 1170766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:49:27.502465 1170766 certs.go:257] generating profile certs ...
	I1217 00:49:27.502566 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:49:27.502627 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:49:27.502667 1170766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:49:27.502675 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:49:27.502694 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:49:27.502705 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:49:27.502716 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:49:27.502725 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:49:27.502736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:49:27.502746 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:49:27.502759 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:49:27.502805 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:49:27.502840 1170766 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:49:27.502848 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:49:27.502873 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:49:27.502896 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:49:27.502918 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:49:27.502963 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:27.502994 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.503007 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.503017 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.503565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:49:27.523390 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:49:27.542159 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:49:27.560122 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:49:27.578247 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:49:27.596258 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:49:27.613943 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:49:27.632292 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:49:27.650819 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:49:27.669066 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:49:27.687617 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:49:27.705744 1170766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:49:27.719458 1170766 ssh_runner.go:195] Run: openssl version
	I1217 00:49:27.725722 1170766 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:49:27.726120 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.733628 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:49:27.741335 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745236 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745284 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745341 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.786230 1170766 command_runner.go:130] > 51391683
	I1217 00:49:27.786728 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:49:27.794669 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.802040 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:49:27.809799 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813741 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813839 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813906 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.854690 1170766 command_runner.go:130] > 3ec20f2e
	I1217 00:49:27.854778 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:49:27.862235 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.869424 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:49:27.877608 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881295 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881338 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881389 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.921808 1170766 command_runner.go:130] > b5213941
	I1217 00:49:27.922298 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
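The hashes printed by "openssl x509 -hash -noout" above (51391683, 3ec20f2e, b5213941) are the names used for the /etc/ssl/certs/<hash>.0 symlinks that the subsequent "sudo test -L" commands verify. A minimal Go sketch of creating such a link by shelling out to openssl; the run above only checks that the links already exist, and the paths below are taken from this log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert symlinks certPath into certsDir under its OpenSSL subject hash,
	// i.e. the <hash>.0 name that the "sudo test -L" checks above look for.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace an existing link, like ln -f
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}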
	I1217 00:49:27.929684 1170766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933543 1170766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933568 1170766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:49:27.933576 1170766 command_runner.go:130] > Device: 259,1	Inode: 3648879     Links: 1
	I1217 00:49:27.933583 1170766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:27.933589 1170766 command_runner.go:130] > Access: 2025-12-17 00:45:19.435586201 +0000
	I1217 00:49:27.933595 1170766 command_runner.go:130] > Modify: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933600 1170766 command_runner.go:130] > Change: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933605 1170766 command_runner.go:130] >  Birth: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933682 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:49:27.974244 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:27.974730 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:49:28.015269 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.015758 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:49:28.065826 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.066538 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:49:28.108358 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.108531 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:49:28.149181 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.149647 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:49:28.190353 1170766 command_runner.go:130] > Certificate will not expire
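Each certificate above is checked with "openssl x509 -checkend 86400", i.e. "will this certificate still be valid 24 hours from now". The same check can be done directly with crypto/x509; a rough sketch assuming one of the paths from this log (illustration only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the certificate at path expires within d,
// the same question "-checkend 86400" answers for d = 24h.
func willExpireWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}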
	I1217 00:49:28.190474 1170766 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:28.190584 1170766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:49:28.190665 1170766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:49:28.221145 1170766 cri.go:89] found id: ""
	I1217 00:49:28.221267 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:49:28.228507 1170766 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:49:28.228597 1170766 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:49:28.228619 1170766 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:49:28.229395 1170766 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:49:28.229438 1170766 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:49:28.229512 1170766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:49:28.236906 1170766 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:49:28.237356 1170766 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.237502 1170766 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389537" cluster setting kubeconfig missing "functional-389537" context setting]
	I1217 00:49:28.237796 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
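The kubeconfig.go lines above detect that the "functional-389537" cluster and context entries are missing from the kubeconfig and repair them. A hedged sketch of that presence check using client-go's clientcmd loader (names and path taken from this log; the repair itself is omitted):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/22168-1134739/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// The profile name must appear both as a cluster and as a context entry.
	_, hasCluster := cfg.Clusters["functional-389537"]
	_, hasContext := cfg.Contexts["functional-389537"]
	if !hasCluster || !hasContext {
		fmt.Println("kubeconfig needs updating (will repair)")
	}
}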
	I1217 00:49:28.238221 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.238396 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.238920 1170766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:49:28.238939 1170766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:49:28.238945 1170766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:49:28.238950 1170766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:49:28.238954 1170766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:49:28.238995 1170766 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:49:28.239224 1170766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:49:28.246965 1170766 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:49:28.247039 1170766 kubeadm.go:602] duration metric: took 17.573937ms to restartPrimaryControlPlane
	I1217 00:49:28.247066 1170766 kubeadm.go:403] duration metric: took 56.597633ms to StartCluster
	I1217 00:49:28.247104 1170766 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.247179 1170766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.247837 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.248043 1170766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:49:28.248489 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:28.248569 1170766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:49:28.248676 1170766 addons.go:70] Setting storage-provisioner=true in profile "functional-389537"
	I1217 00:49:28.248696 1170766 addons.go:239] Setting addon storage-provisioner=true in "functional-389537"
	I1217 00:49:28.248719 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.249218 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.251024 1170766 addons.go:70] Setting default-storageclass=true in profile "functional-389537"
	I1217 00:49:28.251049 1170766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389537"
	I1217 00:49:28.251367 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.254651 1170766 out.go:179] * Verifying Kubernetes components...
	I1217 00:49:28.257533 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:28.287633 1170766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:49:28.290502 1170766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.290526 1170766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:49:28.290609 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.312501 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.312677 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.312998 1170766 addons.go:239] Setting addon default-storageclass=true in "functional-389537"
	I1217 00:49:28.313045 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.313499 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.334272 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.347658 1170766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:28.347681 1170766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:49:28.347742 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.374030 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.486040 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:28.502536 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.510858 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.252938 1170766 node_ready.go:35] waiting up to 6m0s for node "functional-389537" to be "Ready" ...
	I1217 00:49:29.253062 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.253118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.253338 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253370 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253391 1170766 retry.go:31] will retry after 245.662002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253435 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253452 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253459 1170766 retry.go:31] will retry after 276.192706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253512 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.500088 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:29.530677 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.579588 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.579743 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.579792 1170766 retry.go:31] will retry after 478.611243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607395 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.607453 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607473 1170766 retry.go:31] will retry after 213.763614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
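While the apiserver on localhost:8441 is still down, every kubectl apply of the addon manifests fails with connection refused, and retry.go schedules another attempt with a growing, jittered delay (245ms, 276ms, 478ms, ... in the lines above). A rough illustration of that retry-with-backoff pattern, not minikube's actual implementation; the command and attempt count are examples:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl apply until it succeeds or attempts run out,
// roughly doubling the wait (plus jitter) between tries, similar to the
// delays logged by retry.go above.
func retryApply(args []string, attempts int) error {
	wait := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", args...).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("%v: %s", e, out)
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
}

func main() {
	if err := retryApply([]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}, 5); err != nil {
		fmt.Println(err)
	}
}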
	I1217 00:49:29.753751 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.822424 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.886054 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.886099 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.886150 1170766 retry.go:31] will retry after 580.108639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.059411 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.142412 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.142520 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.142548 1170766 retry.go:31] will retry after 335.340669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.253845 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.254297 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.466582 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:30.478378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.546834 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.546919 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.546953 1170766 retry.go:31] will retry after 1.248601584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557846 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.557940 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557983 1170766 retry.go:31] will retry after 1.081200972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.753182 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.253427 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.253542 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.253954 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
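In parallel, node_ready.go polls GET /api/v1/nodes/functional-389537 roughly every 500ms, treating the connection-refused errors above as "not ready yet" and retrying until the node reports a Ready condition or the 6m0s budget runs out. A client-go sketch of such a poll; the kubeconfig path and node name are the ones in this log, and this is an illustration rather than minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady fetches the node every 500ms until its Ready condition is True
// or the timeout elapses. API errors (e.g. connection refused while the
// apiserver restarts) are simply retried.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22168-1134739/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-389537", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}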
	I1217 00:49:31.639465 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:31.698941 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.698993 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.699013 1170766 retry.go:31] will retry after 1.870151971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.754126 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.754197 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.754530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.795965 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:31.861932 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.861982 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.862003 1170766 retry.go:31] will retry after 1.008225242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.253184 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.253372 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.253717 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.753360 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.871155 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:32.928211 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:32.931741 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.931825 1170766 retry.go:31] will retry after 1.349013392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.253256 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.569378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:33.627393 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:33.631136 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.631170 1170766 retry.go:31] will retry after 1.556307432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.753384 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.753462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.753732 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.753786 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.253674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.281872 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:34.338860 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:34.338952 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.338994 1170766 retry.go:31] will retry after 2.730785051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.753261 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.753705 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.188371 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:35.253305 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.253379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.253659 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:35.253682 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253699 1170766 retry.go:31] will retry after 4.092845301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253755 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:36.253666 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.753252 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.753327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.070065 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:37.127098 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:37.130934 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.130970 1170766 retry.go:31] will retry after 4.776908541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.253166 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.753194 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.753659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.253587 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.253946 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.254001 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.753912 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.753994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.754371 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.254004 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.254408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.346816 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:39.407133 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:39.411576 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.411608 1170766 retry.go:31] will retry after 4.420378296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.753168 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.753277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.753541 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.253304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.753271 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.753349 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.753656 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.753707 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:41.253157 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.253546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.909084 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:41.968890 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:41.968925 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:41.968945 1170766 retry.go:31] will retry after 4.028082996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:42.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.253706 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.753164 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.753238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.753522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.253354 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.253724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:43.253792 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.753558 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.753644 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.753949 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.832189 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:43.890902 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:43.894375 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:43.894408 1170766 retry.go:31] will retry after 8.166287631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:44.253620 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.753652 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.753996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.253708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.254080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:45.254153 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.753590 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.753659 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.753909 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.997293 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:46.061414 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:46.061451 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.061470 1170766 retry.go:31] will retry after 11.083982648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.253886 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.253962 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.254309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.754095 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.754205 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.754534 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.253185 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.253531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.753195 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.753675 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:48.253335 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.253411 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.253779 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.753583 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.753654 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.253646 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.254063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.753928 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.754007 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.754325 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.754377 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.253612 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.253695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.253960 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.753804 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.753885 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.254063 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.254137 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.254480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.753480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.060996 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:52.120691 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:52.124209 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.124248 1170766 retry.go:31] will retry after 5.294346985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.253619 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.254054 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.753693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.753855 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.754194 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.254037 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.254206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.254462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.254510 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.753239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.753523 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.253651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.753370 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.753449 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.753783 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.253341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.753617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.753681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.146315 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:57.205486 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.209162 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.209194 1170766 retry.go:31] will retry after 16.847278069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.253385 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.253754 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.419134 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:57.479419 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.482994 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.483029 1170766 retry.go:31] will retry after 11.356263683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.753493 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.253330 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.253407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.753639 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.753716 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.754093 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.754160 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.753887 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.754215 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.253724 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.253810 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.254155 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.754120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.754206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.754562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:00.754621 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.253240 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.253370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.253698 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.253607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.253193 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.253613 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.753572 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.754045 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.253947 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.254268 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.253850 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.254364 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:05.754125 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.754208 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.754551 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.253164 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.253237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.253346 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.253428 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.253751 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.753540 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.753830 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:07.753881 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.253424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.253762 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.753666 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.753745 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.754125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.840442 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:08.894240 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:08.898223 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:08.898257 1170766 retry.go:31] will retry after 31.216976051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:09.253588 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.253672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.753741 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.754120 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:09.754170 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.253935 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.254009 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.253844 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.254271 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.754088 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.754175 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.754499 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:11.754558 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:12.253187 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.253522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.753227 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.753589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.753701 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.057576 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:14.115415 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:14.119129 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.119165 1170766 retry.go:31] will retry after 28.147339136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.253462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.253544 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.253877 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.253932 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:14.753601 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.753672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.753968 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.253641 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.253732 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.253997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.753777 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.253982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:16.254362 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:16.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.754016 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.253840 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.254281 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.754086 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.754162 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.754503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.253672 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.753651 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.753736 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.754062 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:18.754120 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:19.253943 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.254033 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.254372 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.753082 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.753159 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.753506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.753388 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.753479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.753884 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.253955 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.254007 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:21.753781 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.753865 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.254001 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.254355 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.753077 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.753153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.753404 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.253112 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.253188 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.253528 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.753620 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:23.753996 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.253660 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.253733 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.254004 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.753783 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.753862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.754204 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.253869 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.253944 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.254293 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:25.754034 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:26.253773 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.253845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.753983 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.754381 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.253979 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.753096 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.753176 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.753474 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.253306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:28.753591 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.753916 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.253231 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.753237 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.753688 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.753320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:30.753699 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.253635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.753306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.753379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.753638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.253669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.753350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.753691 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:32.753743 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.253478 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.253794 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.753653 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.754080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.253900 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.254314 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.754008 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:34.754052 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.253945 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.254265 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.753720 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.754034 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.253598 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.753708 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.753783 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.754104 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:36.754165 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:37.253918 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.253995 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.254311 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.753961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.253926 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.254006 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.254296 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.754122 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.754199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.754549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:38.754615 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:39.253269 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.253710 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.753180 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.753267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.753624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.116186 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:40.183350 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:40.183412 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.183435 1170766 retry.go:31] will retry after 25.382750455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.253664 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.254066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.753634 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.753706 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.753966 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.253718 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.253791 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.254134 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:41.254188 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:41.754033 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.754109 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.754488 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.253178 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.253257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.253626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.266982 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:42.344498 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:42.344537 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.344558 1170766 retry.go:31] will retry after 17.409313592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.753120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.753194 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.253776 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.253851 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.753822 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.753901 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.754256 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.253756 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.253922 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.254427 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.753326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:46.753299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.753383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.253287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.753623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.253662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.253705 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:48.753669 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.753752 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.754072 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.253894 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.253970 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.254291 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.753636 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.253843 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.253926 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.254289 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:50.254345 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:50.754111 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.754190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.754553 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.253242 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:52.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.754044 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.754422 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.253188 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.753632 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:54.753689 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.253391 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.253469 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.753512 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.753582 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.753864 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.253390 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:57.753200 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.753283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.753631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.253436 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.253523 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.253931 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.753948 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.754017 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.754272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.254035 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.254118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.254476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.254537 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:59.753199 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.754864 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:59.815839 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815879 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815961 1170766 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:00.253363 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.753302 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.753369 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.753727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:01.753787 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.253347 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.253689 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.753247 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.753324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.753665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.754077 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:03.754136 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.253779 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.254148 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.753646 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.753717 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.753978 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.253862 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.253937 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.254272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.566658 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:51:05.627909 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.627957 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.628043 1170766 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:05.631111 1170766 out.go:179] * Enabled addons: 
	I1217 00:51:05.634718 1170766 addons.go:530] duration metric: took 1m37.386158891s for enable addons: enabled=[]
	I1217 00:51:05.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.753674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.253279 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.253356 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.253651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:06.753202 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.753286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.753613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.253337 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.253416 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.753382 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.753456 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.753719 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.253394 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:08.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:08.753597 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.753675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.754006 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.253704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.753759 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.754219 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.254036 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.254117 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.254443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:10.254499 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:10.753146 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.753222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.753504 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.253431 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.253508 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.253817 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.753238 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:12.753661 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:13.253229 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.753608 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.753242 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.753314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.753606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.253289 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.253371 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.253681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:15.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.753291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.253602 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.253661 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:17.253717 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:17.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.753577 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.253297 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.253364 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.253668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.753853 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.754277 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.254102 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.254185 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.254526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:19.254586 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:19.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.753311 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.753580 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.253722 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.753652 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.253372 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.253701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.753406 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.753495 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:21.753874 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:22.253257 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.753561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.753603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.753685 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.754925 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1217 00:51:23.754986 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:24.253170 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.253267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.253617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.753328 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.753409 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.753746 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.253469 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.253546 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.253880 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.753657 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.753917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.253603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.253711 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.254049 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:26.254102 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:26.753618 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.753694 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.253707 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.753801 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.754135 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.253730 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.253819 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.254157 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:28.254213 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:28.754062 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.754150 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.754428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.253246 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.753316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.753701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.753680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:30.753758 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.253283 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.753617 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.753891 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.253582 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.753873 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.753956 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.754335 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:32.754410 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:33.253082 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.253153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.253408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.753211 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.253332 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.253414 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.253813 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.753517 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.753595 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.753879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.253210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.253725 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:35.753393 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.753476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.753815 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.253180 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.253769 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.253245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.253568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.753118 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.753199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.753448 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:37.753489 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.253352 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.253435 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.253790 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.753633 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.753713 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.754052 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.253630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.253702 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.254026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.754056 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:39.754113 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.253723 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.253798 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.254106 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.754024 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.253834 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.253927 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.254334 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.754152 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.754231 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.754552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:41.754611 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:42.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.753201 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.253361 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.253440 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.753589 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.753665 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.253738 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.253820 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.254118 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.254169 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:44.753956 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.754034 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.754376 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.253875 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.253954 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.254382 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.753128 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.753548 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.253245 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.253330 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.753570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:46.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.253226 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.253657 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.753364 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.753750 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.253392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.753652 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.753737 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.754073 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:48.754130 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.253766 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.253847 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.254210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.753704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.253788 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.253862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.254182 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.753997 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.754076 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.754412 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:50.754497 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:51.253162 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.253230 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.753249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.753596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.753309 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.753387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.753660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.253234 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.253323 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.253702 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:53.253761 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:53.753655 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.753749 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.754112 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.253936 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.753647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.253232 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.253310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.753558 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:55.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:56.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.253610 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.753338 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.253172 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.253533 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.753301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.753667 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:57.753734 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:58.253323 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.753599 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.753674 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.253780 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.253867 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.254242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.754384 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.754441 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:00.261843 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.262054 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.262449 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.753175 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.253252 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.253251 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.253328 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.253683 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:02.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.753677 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.253422 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.753719 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.753793 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:04.254346 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.753969 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.253791 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.253873 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.254220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.753910 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.753984 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.754315 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.253622 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.253718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.254014 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.753813 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.753893 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.754190 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.754244 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:07.254039 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.254467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.753167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.753245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.753517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.253390 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.753749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.753834 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.754171 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.253662 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.253741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.254087 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:09.254142 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:09.753914 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.753986 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.754327 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.254174 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.254257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.254595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.753297 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.753370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.253408 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.253499 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.253838 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.753526 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.753601 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.753894 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:11.753949 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:12.253560 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.253632 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.753805 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.754169 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.253986 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.254075 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.254435 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.754109 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.754186 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.754492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:13.754550 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:14.253219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.253640 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.753663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.253555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.753256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.753575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:16.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.253273 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:16.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:16.753300 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.753651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.253388 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.253700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.753205 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:18.253374 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.253447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:18.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:18.753640 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.754063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.253884 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.253974 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.753695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:20.253726 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.253806 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.254124 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:20.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:20.753974 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.754048 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.754388 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.253158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.253440 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.253238 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.253660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:23.253247 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:23.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.253272 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.253348 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.253619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:25.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.253630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:25.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:25.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.753591 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.253192 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.253269 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.753310 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.753396 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.253223 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.253476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.753141 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.753220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:27.753593 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:28.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.253455 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.253810 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:28.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.753915 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.754546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.253345 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.753321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:29.753656 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:30.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.253247 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.253502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:30.753270 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.753355 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.753724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.753227 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.753572 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:32.253250 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:32.253716 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:32.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.253201 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.253278 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.753973 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:34.253749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.253821 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.254108 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:34.254159 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:34.753679 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.753775 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.253885 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.253959 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.754073 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.754148 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.754487 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.753378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.753774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:36.753831 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:37.253510 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.253591 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.253957 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:37.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.253981 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.254058 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.753581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:39.253264 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:39.253650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:39.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.253402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.253743 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.753429 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.753503 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.753767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:41.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:41.253687 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:41.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.753639 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.253183 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.253286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.253225 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.253637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.753531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:43.753579 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:44.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.253576 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:44.753186 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.753264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.753599 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.253295 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.253735 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.753614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:45.753668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:46.253322 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.253398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:46.753161 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.753496 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.253291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:47.753697 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:48.253324 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:48.753549 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.753624 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.253784 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.753644 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.753723 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.754017 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:49.754065 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:50.253810 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.254239 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:50.753899 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.753975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.754306 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.253987 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.753830 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.753910 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.754242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:51.754311 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:52.254071 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.254149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.254484 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:52.753615 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.753942 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.253691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.254010 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.753953 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.754027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.754345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:53.754402 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.253938 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:54.753755 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.753827 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.754137 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.253941 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.254028 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.254370 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.754085 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.754158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:55.754529 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:56.253088 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.253170 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.253491 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.753629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.253537 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.753221 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:58.253261 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.253670 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:58.253729 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:58.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.754036 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.253988 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.254385 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.753169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:00.255875 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.256036 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.256356 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:00.256590 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:00.753314 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.753406 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.753729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.253451 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.253526 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.253836 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.753275 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.753592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.253224 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.753389 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:02.753739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:03.253378 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.253737 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:03.753753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.753845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.754210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.253955 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.254035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.753974 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:04.754016 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:05.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.254027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:05.753103 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.753190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.753552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.253106 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.253183 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.253481 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.753270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.753579 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:07.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.253288 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.253606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:07.253665 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:07.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.753237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.753615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.253519 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.253592 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.253905 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.754029 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.754407 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:09.253610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.253927 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:09.253968 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:09.753621 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.253736 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.253811 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.254126 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.753682 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.753989 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:11.253783 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:11.254252 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:11.754017 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.754095 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.754418 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.253174 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.253431 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.753169 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.753584 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.253725 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.753680 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.753954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:13.753997 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:14.253722 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.253802 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.254151 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:14.753815 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.753891 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.754223 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.254029 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.753806 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.753888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.754227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:15.754287 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:16.254074 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.254151 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.254498 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:16.753147 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.753225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.753479 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.253581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.753255 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:18.253273 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.253344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.253604 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:18.253646 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:18.753564 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.753634 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.253242 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.753337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:20.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:20.253718 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:20.753425 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.753514 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.753897 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.253583 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.753341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.753692 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.253625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.753263 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.753343 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:23.253236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.253636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:23.753622 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.754022 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.253690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.753690 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.753765 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:24.754119 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:25.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.253969 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.254295 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:25.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.253798 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.253879 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.254195 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.754019 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.754098 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.754443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:26.754501 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.253228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:27.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.753266 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.253426 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.253518 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.253857 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.753672 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.753767 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:29.254090 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.254181 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.254562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:29.254618 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:29.753292 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.753381 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.753726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.253402 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.253471 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.753408 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.753487 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.753850 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.753559 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:31.753600 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:32.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:32.755552 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.755633 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.755956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.253924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.753903 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.753982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.754307 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:33.754366 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:34.254124 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.254211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.254539 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:34.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.753398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:36.253412 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.253489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.253839 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:36.253891 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:36.753198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.753274 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.253727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.753428 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.753500 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.753749 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:38.253689 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.253766 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.254125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:38.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:38.753984 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.754059 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.754410 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.253122 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.253198 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.253459 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.753151 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.753259 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.753585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.253413 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.253767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.753533 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:40.753859 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:41.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.253596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:41.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.753268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.753605 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.253303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:43.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.253419 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:43.254022 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:43.753920 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.754014 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.754333 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.253118 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.253201 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.253526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:45.255002 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.255152 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.255478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:45.255533 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.753317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.253364 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.253796 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.753240 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.753574 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.753402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.753748 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:47.753808 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:48.253313 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:48.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.754069 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.253753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.253830 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.254168 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.753643 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.753731 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.754066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:49.754148 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:50.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.253994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:50.753099 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.753189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.253251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.253515 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:52.253411 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.253511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.253890 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:52.253964 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:52.753645 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.753719 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.253775 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.254202 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.754104 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.754180 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.754506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.253165 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.253239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.253494 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:54.753683 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:55.253358 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.253438 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.253774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:55.753173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.253177 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.253263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.253600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.753404 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:56.753805 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:57.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.253238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.253497 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.253572 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.253908 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.753944 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:58.753983 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:59.253741 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.253823 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.254166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:59.753959 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.754035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.253101 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.253195 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.753249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.753333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:01.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.253476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.253809 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:01.253884 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:01.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.753357 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.253331 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.253412 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.253739 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.753476 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.753557 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.753921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:03.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:03.253961 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:03.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.253243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.753367 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.753380 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.753466 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.753795 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:05.753852 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:06.253248 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:06.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.253321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.753244 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.753502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:08.253274 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.253352 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.253726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:08.253781 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:08.753770 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.753843 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.754162 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.253596 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.253675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.253945 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.753821 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.753904 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.754197 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:10.254043 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.254442 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:10.254495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:10.753142 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.753213 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.753467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.753382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.753753 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.253233 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.753305 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:12.753685 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:13.253376 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.253460 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.253784 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:13.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.753691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.253819 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.253898 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.254259 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.754072 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.754149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.754478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:14.754538 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:15.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.253248 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.253513 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:15.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.253764 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.753262 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:17.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.253350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.253713 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:17.253779 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:17.753480 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.753569 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.253664 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.753923 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.754002 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.754397 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.253225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.753254 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:19.753636 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:20.253334 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:20.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:21.753680 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:22.253524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.254279 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:22.753625 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.753972 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.253809 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.253888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.254196 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.754021 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.754101 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.754439 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:23.754495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:24.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.253250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.253623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:24.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.753607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.253403 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.253757 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.753263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.753530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:26.253258 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.253351 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.253693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:26.253746 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:26.753414 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.753490 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.753826 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.253253 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.753244 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.753673 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:28.253404 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.253479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.253776 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:28.253819 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:28.753595 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.753935 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.253381 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.253465 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.253954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.753737 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.753815 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:30.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:30.253995 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:30.753581 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.753666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.753956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.253745 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.253824 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.254143 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.753606 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.754026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:32.253830 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.254262 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:32.254319 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:32.754091 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.754169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.754555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.253132 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.253222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.753524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.753608 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.753895 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.253698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.753696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.753951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:34.753991 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:35.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.253882 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.254227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:35.754036 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.754112 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.754409 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.253093 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.253164 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.253416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.753271 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:37.253294 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.253378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.253664 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:37.253713 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:37.753375 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.253304 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.253376 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.753592 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.754003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:39.253608 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.253678 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.253933 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:39.253982 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:39.753743 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.753818 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.754166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.253997 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.254396 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.753116 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.253645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.753348 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.753424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.753761 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:41.753817 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:42.265137 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.265218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.265549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:42.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.753653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.253379 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.253788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.753627 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.753708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:44.253832 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.254217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:44.754035 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.754111 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.754446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.253168 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.753612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:46.253316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.253442 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:46.253773 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:46.753434 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.753511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.753766 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.253200 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.253277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.253570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.753267 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.753344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.753625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:48.253552 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.253626 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.253879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:48.253930 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:48.753836 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.753911 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.754217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.254026 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.254106 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.753598 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.753686 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:50.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.254209 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:50.254259 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:50.754039 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.754125 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.253140 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.253209 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.253462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.753185 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.253618 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.753250 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.753598 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:52.753651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:53.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:53.753664 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.753741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.754081 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.253591 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.253669 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.254015 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.753866 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.753946 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.754274 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:54.754329 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:55.254057 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.254131 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.254446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:55.753121 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.753211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.753456 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.253253 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.253557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:57.253260 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:57.253672 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:57.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.753392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.253532 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.253606 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.253910 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:59.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.253799 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.254130 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:59.254190 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:59.753954 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.754031 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.754326 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.260359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.261314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.266189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.753848 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:01.253975 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.254047 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:01.254396 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:01.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.753698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.753979 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.253768 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.253842 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.254146 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.753881 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.754220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.253637 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.253710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.253982 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.753913 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.753997 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.754309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:03.754367 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:04.254115 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.254189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.254536 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:04.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.753161 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.753416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.253139 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.253218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.253585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.753162 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.753568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:06.253362 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.253441 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:06.253744 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.753378 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.753454 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.753700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:08.253293 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.253374 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.253718 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:08.253778 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:08.753537 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.753616 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.253654 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.253730 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.254027 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.753808 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.753883 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.754221 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:10.254047 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.254124 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.254490 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:10.254545 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:10.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.753567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.253638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.753650 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.253202 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.253270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.253527 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:12.753698 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:13.253182 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.253256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.253592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:13.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.753477 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.753394 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.753489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.753829 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:14.753892 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:15.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:15.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.753586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.253207 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.753176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.753503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:17.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:17.253694 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:17.753223 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.253586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.753702 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.753779 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.754110 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:19.253939 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.254018 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.254367 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:19.254421 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:19.754123 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.754196 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.754517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.253190 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.753740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.253432 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.253502 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.253792 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.753215 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.753636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:21.753701 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:22.253195 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.253276 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:22.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.253220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.253589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:24.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.253665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:24.253703 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:24.753359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.753447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.753788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.253481 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.253571 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.253917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.753617 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.753635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:26.753691 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:27.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:27.753345 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.753799 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.253586 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.253666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.253996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.753669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:28.753726 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:29.253176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:29.253235 1170766 node_ready.go:38] duration metric: took 6m0.000252571s for node "functional-389537" to be "Ready" ...
	I1217 00:55:29.256355 1170766 out.go:203] 
	W1217 00:55:29.259198 1170766 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:55:29.259223 1170766 out.go:285] * 
	W1217 00:55:29.261375 1170766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:55:29.264098 1170766 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438930325Z" level=info msg="Using the internal default seccomp profile"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438939252Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438944848Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438951256Z" level=info msg="RDT not available in the host system"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438966747Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439855023Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439883519Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439901061Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.440753933Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.440781723Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.44093034Z" level=info msg="Updated default CNI network name to "
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.441785451Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.44229278Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.442353874Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493497332Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493537101Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493622753Z" level=info msg="Create NRI interface"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493738041Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493751661Z" level=info msg="runtime interface created"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493764321Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493771796Z" level=info msg="runtime interface starting up..."
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493778499Z" level=info msg="starting plugins..."
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493791717Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493858186Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:49:26 functional-389537 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:31.350424    8710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.351382    8710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.352258    8710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.352984    8710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:31.354663    8710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:37] overlayfs: idmapped layers are currently not supported
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:55:31 up  6:38,  0 user,  load average: 0.02, 0.18, 0.68
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:55:28 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:29 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 17 00:55:29 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:29 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:29 functional-389537 kubelet[8599]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:29 functional-389537 kubelet[8599]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:29 functional-389537 kubelet[8599]: E1217 00:55:29.589328    8599 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:29 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:29 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:30 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 17 00:55:30 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:30 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:30 functional-389537 kubelet[8606]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:30 functional-389537 kubelet[8606]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:30 functional-389537 kubelet[8606]: E1217 00:55:30.311799    8606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:30 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:30 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:30 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 17 00:55:30 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:30 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:31 functional-389537 kubelet[8634]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:31 functional-389537 kubelet[8634]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:31 functional-389537 kubelet[8634]: E1217 00:55:31.066124    8634 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
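The kubelet entries at the tail of the captured log above are the proximate cause of the dead API server: kubelet v1.35.0-beta.0 exits during config validation because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps restarting it (restart counter 1129-1131) and nothing ever serves 192.168.49.2:8441. A quick way to confirm the cgroup hierarchy and the (empty) container list from outside the node is shown below; these are illustrative diagnostics run against the docker-driver container from this run, not part of the test itself.

	# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
	docker exec functional-389537 stat -fc %T /sys/fs/cgroup

	# CRI-O itself started fine (see its log above), so it can still be queried directly;
	# an empty listing here matches the empty "container status" table
	docker exec functional-389537 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a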
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (345.514551ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.02s)
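For context on the failure mode: most of the log captured above is minikube re-issuing GET https://192.168.49.2:8441/api/v1/nodes/functional-389537 roughly every 500ms, getting "connection refused", and finally giving up when the 6m0s node-ready deadline expires (WaitNodeCondition: context deadline exceeded). A minimal, self-contained Go sketch of that poll-until-deadline pattern is shown below; it is not minikube's actual implementation, the endpoint and timings are simply taken from the log, and InsecureSkipVerify stands in for the real cluster client certificates.

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForNode polls the node URL until the API server answers 200 or the
	// context deadline passes, mirroring the retry cadence in the log above.
	func waitForNode(ctx context.Context, url string) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		for {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := client.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // got the node object; the caller would now inspect the Ready condition
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitForNode(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-389537"))
	}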
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-389537 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-389537 get po -A: exit status 1 (78.156088ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-389537 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-389537 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-389537 get po -A"
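This is the same connection-refused symptom as the SoftStart failure: kubectl resolves the functional-389537 context correctly, but nothing is listening on 192.168.49.2:8441 because kubelet (and therefore the API server) never came back up. Before reading the post-mortem dump below, the failure can usually be confirmed directly against the endpoint; these commands are illustrative, with the address and profile name taken from this run.

	# Probe the apiserver port kubectl is trying to reach
	curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz

	# Ask minikube what it believes the apiserver state is (matches the "Stopped" output above)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537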
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
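The inspect output confirms the container itself is healthy: State.Status is "running" and 8441/tcp (the apiserver port) is published to 127.0.0.1:33911, so the failure sits inside the guest rather than at the Docker layer. The relevant fields can be pulled directly with Go templates, mirroring the docker container inspect -f calls that appear later in the minikube log (8441/tcp is substituted here for the 22/tcp key minikube queries):

	docker container inspect functional-389537 --format '{{.State.Status}}'
	docker container inspect functional-389537 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'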
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (311.815329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
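Exit status 2 with Host reported as Running is consistent with the node container being up while the control plane is down. A broader status query is a quick way to see which component is unhealthy; this is a sketch assuming the standard minikube status template fields (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-arm64 status -p functional-389537 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'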
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 logs -n 25: (1.045738614s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-099267 image rm kicbase/echo-server:functional-099267 --alsologtostderr                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image save --daemon kicbase/echo-server:functional-099267 --alsologtostderr                                                     │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/test/nested/copy/1136597/hosts                                                                                │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/1136597.pem                                                                                         │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/1136597.pem                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/11365972.pem                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /usr/share/ca-certificates/11365972.pem                                                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ update-context │ functional-099267 update-context --alsologtostderr -v=2                                                                                           │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format short --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh            │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image          │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image          │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete         │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start          │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start          │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:49:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:49:23.461389 1170766 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:49:23.461547 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461559 1170766 out.go:374] Setting ErrFile to fd 2...
	I1217 00:49:23.461579 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461900 1170766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:49:23.462303 1170766 out.go:368] Setting JSON to false
	I1217 00:49:23.463185 1170766 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23514,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:49:23.463289 1170766 start.go:143] virtualization:  
	I1217 00:49:23.466912 1170766 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:49:23.469855 1170766 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:49:23.469995 1170766 notify.go:221] Checking for updates...
	I1217 00:49:23.475916 1170766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:49:23.478779 1170766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:23.481739 1170766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:49:23.484668 1170766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:49:23.487521 1170766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:49:23.490907 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:23.491070 1170766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:49:23.524450 1170766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:49:23.524610 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.580909 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.571176137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.581015 1170766 docker.go:319] overlay module found
	I1217 00:49:23.585845 1170766 out.go:179] * Using the docker driver based on existing profile
	I1217 00:49:23.588706 1170766 start.go:309] selected driver: docker
	I1217 00:49:23.588726 1170766 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.588842 1170766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:49:23.588945 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.644593 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.634960306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.645010 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:23.645070 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:23.645127 1170766 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.648351 1170766 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:49:23.651037 1170766 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:49:23.653878 1170766 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:49:23.656858 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:23.656904 1170766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:49:23.656917 1170766 cache.go:65] Caching tarball of preloaded images
	I1217 00:49:23.656980 1170766 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:49:23.657013 1170766 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:49:23.657024 1170766 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:49:23.657126 1170766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:49:23.675917 1170766 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:49:23.675939 1170766 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:49:23.675960 1170766 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:49:23.675991 1170766 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:49:23.676062 1170766 start.go:364] duration metric: took 47.228µs to acquireMachinesLock for "functional-389537"
	I1217 00:49:23.676087 1170766 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:49:23.676097 1170766 fix.go:54] fixHost starting: 
	I1217 00:49:23.676360 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:23.693660 1170766 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:49:23.693691 1170766 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:49:23.696944 1170766 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:49:23.696988 1170766 machine.go:94] provisionDockerMachine start ...
	I1217 00:49:23.697095 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.714561 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.714904 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.714921 1170766 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:49:23.856040 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:23.856064 1170766 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:49:23.856128 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.875306 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.875626 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.875637 1170766 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:49:24.024137 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:24.024222 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.043436 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.043770 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.043794 1170766 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:49:24.176920 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:49:24.176960 1170766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:49:24.176987 1170766 ubuntu.go:190] setting up certificates
	I1217 00:49:24.177005 1170766 provision.go:84] configureAuth start
	I1217 00:49:24.177076 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:24.194508 1170766 provision.go:143] copyHostCerts
	I1217 00:49:24.194553 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194603 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:49:24.194616 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194693 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:49:24.194827 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194850 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:49:24.194859 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194890 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:49:24.194946 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.194967 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:49:24.194975 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.195000 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:49:24.195062 1170766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:49:24.401567 1170766 provision.go:177] copyRemoteCerts
	I1217 00:49:24.401643 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:49:24.401688 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.419163 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:24.516584 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:49:24.516654 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:49:24.535526 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:49:24.535590 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:49:24.556116 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:49:24.556181 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:49:24.575533 1170766 provision.go:87] duration metric: took 398.504828ms to configureAuth
	I1217 00:49:24.575561 1170766 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:49:24.575753 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:24.575856 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.593152 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.593467 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.593486 1170766 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:49:24.914611 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:49:24.914655 1170766 machine.go:97] duration metric: took 1.217656857s to provisionDockerMachine
	I1217 00:49:24.914668 1170766 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:49:24.914681 1170766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:49:24.914755 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:49:24.914823 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.935845 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.036750 1170766 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:49:25.040402 1170766 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:49:25.040450 1170766 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:49:25.040457 1170766 command_runner.go:130] > VERSION_ID="12"
	I1217 00:49:25.040461 1170766 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:49:25.040466 1170766 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:49:25.040470 1170766 command_runner.go:130] > ID=debian
	I1217 00:49:25.040475 1170766 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:49:25.040479 1170766 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:49:25.040485 1170766 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:49:25.040531 1170766 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:49:25.040571 1170766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:49:25.040583 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:49:25.040642 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:49:25.040724 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:49:25.040736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 00:49:25.040812 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:49:25.040822 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> /etc/test/nested/copy/1136597/hosts
	I1217 00:49:25.040875 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:49:25.048565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:25.066116 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:49:25.083960 1170766 start.go:296] duration metric: took 169.276161ms for postStartSetup
	I1217 00:49:25.084042 1170766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:49:25.084089 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.101382 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.193085 1170766 command_runner.go:130] > 18%
	I1217 00:49:25.193644 1170766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:49:25.197890 1170766 command_runner.go:130] > 160G
	I1217 00:49:25.198395 1170766 fix.go:56] duration metric: took 1.522293417s for fixHost
	I1217 00:49:25.198422 1170766 start.go:83] releasing machines lock for "functional-389537", held for 1.522344181s
	I1217 00:49:25.198491 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:25.216362 1170766 ssh_runner.go:195] Run: cat /version.json
	I1217 00:49:25.216396 1170766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:49:25.216449 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.216473 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.237434 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.266075 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.438053 1170766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:49:25.438122 1170766 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:49:25.438253 1170766 ssh_runner.go:195] Run: systemctl --version
	I1217 00:49:25.444320 1170766 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:49:25.444367 1170766 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:49:25.444850 1170766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:49:25.480454 1170766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:49:25.484847 1170766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:49:25.484904 1170766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:49:25.484962 1170766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:49:25.493012 1170766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:49:25.493039 1170766 start.go:496] detecting cgroup driver to use...
	I1217 00:49:25.493090 1170766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:49:25.493156 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:49:25.508569 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:49:25.521635 1170766 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:49:25.521740 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:49:25.537766 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:49:25.551122 1170766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:49:25.669862 1170766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:49:25.789898 1170766 docker.go:234] disabling docker service ...
	I1217 00:49:25.789984 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:49:25.805401 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:49:25.818559 1170766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:49:25.946131 1170766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:49:26.093460 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:49:26.106879 1170766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:49:26.120278 1170766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1217 00:49:26.121659 1170766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:49:26.121720 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.130856 1170766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:49:26.130968 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.140092 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.149223 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.158222 1170766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:49:26.166662 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.176047 1170766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.184976 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.194179 1170766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:49:26.201960 1170766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:49:26.202030 1170766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:49:26.209746 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.327753 1170766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:49:26.499257 1170766 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:49:26.499380 1170766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:49:26.502956 1170766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 00:49:26.502992 1170766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:49:26.503000 1170766 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1217 00:49:26.503008 1170766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:26.503016 1170766 command_runner.go:130] > Access: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503022 1170766 command_runner.go:130] > Modify: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503035 1170766 command_runner.go:130] > Change: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503041 1170766 command_runner.go:130] >  Birth: -
	I1217 00:49:26.503359 1170766 start.go:564] Will wait 60s for crictl version
	I1217 00:49:26.503439 1170766 ssh_runner.go:195] Run: which crictl
	I1217 00:49:26.507311 1170766 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:49:26.507416 1170766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:49:26.531135 1170766 command_runner.go:130] > Version:  0.1.0
	I1217 00:49:26.531410 1170766 command_runner.go:130] > RuntimeName:  cri-o
	I1217 00:49:26.531606 1170766 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 00:49:26.531797 1170766 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:49:26.534036 1170766 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:49:26.534147 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.559497 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.559533 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.559539 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.559545 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.559550 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.559554 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.559558 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.559563 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.559567 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.559570 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.559574 1170766 command_runner.go:130] >      static
	I1217 00:49:26.559578 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.559582 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.559598 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.559608 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.559612 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.559615 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.559620 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.559632 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.559637 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.561572 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.587741 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.587775 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.587782 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.587787 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.587793 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.587846 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.587858 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.587864 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.587877 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.587887 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.587891 1170766 command_runner.go:130] >      static
	I1217 00:49:26.587894 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.587897 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.587919 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.587929 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.587935 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.587950 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.587961 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.587966 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.587971 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.594651 1170766 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:49:26.597589 1170766 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:49:26.614215 1170766 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:49:26.618047 1170766 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:49:26.618237 1170766 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:49:26.618355 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:26.618425 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.651766 1170766 command_runner.go:130] > {
	I1217 00:49:26.651794 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.651799 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651810 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.651814 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651830 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.651837 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651841 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651850 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.651859 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.651866 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651870 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.651874 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651881 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651884 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651887 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651894 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.651901 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651911 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.651914 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651918 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651926 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.651935 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.651948 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651953 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.651957 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651963 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651970 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651973 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651980 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.651986 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651991 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.651994 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651998 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652006 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.652014 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.652026 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652030 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.652034 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.652038 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652041 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652044 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652051 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.652057 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652062 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.652065 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652069 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652077 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.652087 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.652091 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652095 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.652106 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652118 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652122 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652131 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652135 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652156 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652165 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652183 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.652204 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652210 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.652215 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652219 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652227 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.652238 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.652242 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652246 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.652252 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652256 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652260 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652266 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652271 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652274 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652277 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652284 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.652289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652296 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.652302 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652305 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652313 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.652322 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.652329 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652333 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.652337 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652344 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652350 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652354 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652358 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652361 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652364 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652371 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.652379 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652407 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.652458 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652463 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652470 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.652478 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.652526 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652536 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.652557 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652564 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652567 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652570 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652577 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.652589 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652595 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.652598 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652605 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652615 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.652653 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.652661 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652666 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.652670 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652674 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652677 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652681 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652689 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652696 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652702 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652708 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.652712 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652717 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.652722 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652726 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652734 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.652741 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.652747 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652751 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.652755 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652761 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.652765 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652775 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652779 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.652782 1170766 command_runner.go:130] >     }
	I1217 00:49:26.652785 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.652790 1170766 command_runner.go:130] > }
	I1217 00:49:26.655303 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.655332 1170766 crio.go:433] Images already preloaded, skipping extraction
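
The JSON dump above is what the preload check in crio.go consumes before concluding "all images are preloaded". A minimal sketch, assuming only the field names visible in that output (images, id, repoTags, repoDigests, size, pinned), of verifying that an expected tag is already present in the runtime:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Shapes match the keys visible in the `crictl images --output json` dump above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl images failed:", err)
			return
		}
		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("unexpected crictl output:", err)
			return
		}
		// One of the tags the v1.35.0-beta.0/crio preload is expected to contain, per the log above.
		want := "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded:", tag, "=>", img.ID)
					return
				}
			}
		}
		fmt.Println("missing from runtime:", want)
	}

When every required image resolves this way, minikube skips tarball extraction, which is why the second `crictl images` run below is followed by "Images are preloaded, skipping loading".
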
	I1217 00:49:26.655388 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.678896 1170766 command_runner.go:130] > {
	I1217 00:49:26.678916 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.678921 1170766 command_runner.go:130] >     {
	I1217 00:49:26.678929 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.678933 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.678939 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.678942 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678946 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.678958 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.678968 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.678972 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678976 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.678980 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.678990 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679002 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679020 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679027 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.679030 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679036 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.679039 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679043 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679056 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.679065 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.679071 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679075 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.679079 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679091 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679098 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679101 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679107 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.679111 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679119 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.679122 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679127 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679135 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.679146 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.679149 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679153 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.679160 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.679164 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679169 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679172 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679179 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.679185 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679190 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.679194 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679199 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679215 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.679225 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.679228 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679233 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.679239 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679243 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679249 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679257 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679264 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679268 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679271 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679277 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.679289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679294 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.679297 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679301 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679309 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.679317 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.679328 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679333 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.679336 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679340 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679344 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679351 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679355 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679365 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679368 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679375 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.679378 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679387 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.679390 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679394 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679405 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.679419 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.679423 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679427 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.679438 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679442 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679445 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679449 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679455 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679459 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679462 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679471 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.679476 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679481 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.679486 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679491 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679501 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.679517 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.679521 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679525 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.679529 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679535 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679543 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679549 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679555 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.679560 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679568 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.679574 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679577 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679586 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.679605 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.679612 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679616 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.679619 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679626 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679629 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679633 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679637 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679640 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679643 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679649 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.679655 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679660 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.679672 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679676 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679683 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.679691 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.679698 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679703 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.679706 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679710 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.679713 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679717 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679721 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.679727 1170766 command_runner.go:130] >     }
	I1217 00:49:26.679730 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.679735 1170766 command_runner.go:130] > }
	I1217 00:49:26.682128 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.682152 1170766 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:49:26.682160 1170766 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:49:26.682270 1170766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
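
The block above is the kubelet systemd drop-in minikube generates for this node, with the cluster config it was derived from. For reference, a drop-in like it can be assembled with Go's text/template; the struct fields and template below are illustrative assumptions rather than minikube's actual kubeadm template, while the flag values mirror the unit logged above:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative template only; layout and field names are assumptions.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from this functional-389537 run.
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.35.0-beta.0", "functional-389537", "192.168.49.2"})
	}

The rendered unit is what keeps the kubelet pinned to the CRI-O socket (via Wants=crio.service) and to this node's IP; the next step in the log, `crio config`, is how minikube reads back the runtime's effective configuration before writing its own overrides.
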
	I1217 00:49:26.682351 1170766 ssh_runner.go:195] Run: crio config
	I1217 00:49:26.731730 1170766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 00:49:26.731754 1170766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 00:49:26.731761 1170766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 00:49:26.731764 1170766 command_runner.go:130] > #
	I1217 00:49:26.731771 1170766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 00:49:26.731778 1170766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 00:49:26.731784 1170766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 00:49:26.731801 1170766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 00:49:26.731808 1170766 command_runner.go:130] > # reload'.
	I1217 00:49:26.731815 1170766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 00:49:26.731836 1170766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 00:49:26.731843 1170766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 00:49:26.731849 1170766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 00:49:26.731853 1170766 command_runner.go:130] > [crio]
	I1217 00:49:26.731859 1170766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 00:49:26.731866 1170766 command_runner.go:130] > # containers images, in this directory.
	I1217 00:49:26.732568 1170766 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 00:49:26.732592 1170766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 00:49:26.733157 1170766 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 00:49:26.733176 1170766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 00:49:26.733597 1170766 command_runner.go:130] > # imagestore = ""
	I1217 00:49:26.733614 1170766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 00:49:26.733623 1170766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 00:49:26.734179 1170766 command_runner.go:130] > # storage_driver = "overlay"
	I1217 00:49:26.734196 1170766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 00:49:26.734204 1170766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 00:49:26.734478 1170766 command_runner.go:130] > # storage_option = [
	I1217 00:49:26.734782 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.734798 1170766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 00:49:26.734807 1170766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 00:49:26.735378 1170766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 00:49:26.735394 1170766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 00:49:26.735411 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 00:49:26.735422 1170766 command_runner.go:130] > # always happen on a node reboot
	I1217 00:49:26.735984 1170766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 00:49:26.736023 1170766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 00:49:26.736036 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 00:49:26.736041 1170766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 00:49:26.736536 1170766 command_runner.go:130] > # version_file_persist = ""
	I1217 00:49:26.736561 1170766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 00:49:26.736570 1170766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 00:49:26.737150 1170766 command_runner.go:130] > # internal_wipe = true
	I1217 00:49:26.737173 1170766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 00:49:26.737180 1170766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 00:49:26.737739 1170766 command_runner.go:130] > # internal_repair = true
	I1217 00:49:26.737758 1170766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 00:49:26.737766 1170766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 00:49:26.737772 1170766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 00:49:26.738332 1170766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 00:49:26.738352 1170766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 00:49:26.738356 1170766 command_runner.go:130] > [crio.api]
	I1217 00:49:26.738361 1170766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 00:49:26.738921 1170766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 00:49:26.738940 1170766 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 00:49:26.739496 1170766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 00:49:26.739517 1170766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 00:49:26.739523 1170766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 00:49:26.740074 1170766 command_runner.go:130] > # stream_port = "0"
	I1217 00:49:26.740093 1170766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 00:49:26.740679 1170766 command_runner.go:130] > # stream_enable_tls = false
	I1217 00:49:26.740700 1170766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 00:49:26.741116 1170766 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 00:49:26.741133 1170766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 00:49:26.741147 1170766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 00:49:26.741613 1170766 command_runner.go:130] > # stream_tls_cert = ""
	I1217 00:49:26.741629 1170766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 00:49:26.741636 1170766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 00:49:26.742076 1170766 command_runner.go:130] > # stream_tls_key = ""
	I1217 00:49:26.742092 1170766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 00:49:26.742107 1170766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 00:49:26.742117 1170766 command_runner.go:130] > # automatically pick up the changes.
	I1217 00:49:26.742632 1170766 command_runner.go:130] > # stream_tls_ca = ""
	I1217 00:49:26.742675 1170766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743308 1170766 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 00:49:26.743331 1170766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743950 1170766 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 00:49:26.743971 1170766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 00:49:26.743978 1170766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 00:49:26.743981 1170766 command_runner.go:130] > [crio.runtime]
	I1217 00:49:26.743988 1170766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 00:49:26.743996 1170766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 00:49:26.744007 1170766 command_runner.go:130] > # "nofile=1024:2048"
	I1217 00:49:26.744021 1170766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 00:49:26.744329 1170766 command_runner.go:130] > # default_ulimits = [
	I1217 00:49:26.744680 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.744702 1170766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 00:49:26.745338 1170766 command_runner.go:130] > # no_pivot = false
	I1217 00:49:26.745359 1170766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 00:49:26.745367 1170766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 00:49:26.745979 1170766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 00:49:26.746000 1170766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 00:49:26.746006 1170766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 00:49:26.746013 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.746484 1170766 command_runner.go:130] > # conmon = ""
	I1217 00:49:26.746503 1170766 command_runner.go:130] > # Cgroup setting for conmon
	I1217 00:49:26.746512 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 00:49:26.746837 1170766 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 00:49:26.746859 1170766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 00:49:26.746866 1170766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 00:49:26.746875 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.747181 1170766 command_runner.go:130] > # conmon_env = [
	I1217 00:49:26.747508 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.747529 1170766 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 00:49:26.747536 1170766 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 00:49:26.747545 1170766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 00:49:26.747848 1170766 command_runner.go:130] > # default_env = [
	I1217 00:49:26.748185 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.748200 1170766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 00:49:26.748210 1170766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 00:49:26.750925 1170766 command_runner.go:130] > # selinux = false
	I1217 00:49:26.750948 1170766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 00:49:26.750958 1170766 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 00:49:26.750964 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.751661 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.751677 1170766 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 00:49:26.751683 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752150 1170766 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 00:49:26.752167 1170766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 00:49:26.752181 1170766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 00:49:26.752191 1170766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 00:49:26.752216 1170766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 00:49:26.752224 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752873 1170766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 00:49:26.752894 1170766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 00:49:26.752932 1170766 command_runner.go:130] > # the cgroup blockio controller.
	I1217 00:49:26.753417 1170766 command_runner.go:130] > # blockio_config_file = ""
	I1217 00:49:26.753438 1170766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 00:49:26.753444 1170766 command_runner.go:130] > # blockio parameters.
	I1217 00:49:26.754055 1170766 command_runner.go:130] > # blockio_reload = false
	I1217 00:49:26.754079 1170766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 00:49:26.754084 1170766 command_runner.go:130] > # irqbalance daemon.
	I1217 00:49:26.754673 1170766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 00:49:26.754692 1170766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 00:49:26.754700 1170766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 00:49:26.754708 1170766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 00:49:26.755498 1170766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 00:49:26.755515 1170766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 00:49:26.755521 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.756018 1170766 command_runner.go:130] > # rdt_config_file = ""
	I1217 00:49:26.756034 1170766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 00:49:26.756360 1170766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 00:49:26.756381 1170766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 00:49:26.756895 1170766 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 00:49:26.756917 1170766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 00:49:26.756925 1170766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 00:49:26.756935 1170766 command_runner.go:130] > # will be added.
	I1217 00:49:26.757272 1170766 command_runner.go:130] > # default_capabilities = [
	I1217 00:49:26.757675 1170766 command_runner.go:130] > # 	"CHOWN",
	I1217 00:49:26.758010 1170766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 00:49:26.758348 1170766 command_runner.go:130] > # 	"FSETID",
	I1217 00:49:26.758682 1170766 command_runner.go:130] > # 	"FOWNER",
	I1217 00:49:26.759200 1170766 command_runner.go:130] > # 	"SETGID",
	I1217 00:49:26.759214 1170766 command_runner.go:130] > # 	"SETUID",
	I1217 00:49:26.759238 1170766 command_runner.go:130] > # 	"SETPCAP",
	I1217 00:49:26.759246 1170766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 00:49:26.759249 1170766 command_runner.go:130] > # 	"KILL",
	I1217 00:49:26.759253 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759261 1170766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 00:49:26.759273 1170766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 00:49:26.759278 1170766 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 00:49:26.759290 1170766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 00:49:26.759297 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759305 1170766 command_runner.go:130] > default_sysctls = [
	I1217 00:49:26.759310 1170766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 00:49:26.759312 1170766 command_runner.go:130] > ]
	I1217 00:49:26.759317 1170766 command_runner.go:130] > # List of devices on the host that a
	I1217 00:49:26.759323 1170766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 00:49:26.759327 1170766 command_runner.go:130] > # allowed_devices = [
	I1217 00:49:26.759331 1170766 command_runner.go:130] > # 	"/dev/fuse",
	I1217 00:49:26.759338 1170766 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 00:49:26.759341 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759347 1170766 command_runner.go:130] > # List of additional devices. specified as
	I1217 00:49:26.759358 1170766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 00:49:26.759363 1170766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 00:49:26.759373 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759377 1170766 command_runner.go:130] > # additional_devices = [
	I1217 00:49:26.759380 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759386 1170766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 00:49:26.759396 1170766 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 00:49:26.759406 1170766 command_runner.go:130] > # 	"/etc/cdi",
	I1217 00:49:26.759411 1170766 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 00:49:26.759414 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759421 1170766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 00:49:26.759446 1170766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 00:49:26.759454 1170766 command_runner.go:130] > # Defaults to false.
	I1217 00:49:26.759459 1170766 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 00:49:26.759466 1170766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 00:49:26.759476 1170766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 00:49:26.759480 1170766 command_runner.go:130] > # hooks_dir = [
	I1217 00:49:26.759486 1170766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 00:49:26.759490 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759496 1170766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 00:49:26.759505 1170766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 00:49:26.759511 1170766 command_runner.go:130] > # its default mounts from the following two files:
	I1217 00:49:26.759515 1170766 command_runner.go:130] > #
	I1217 00:49:26.759522 1170766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 00:49:26.759532 1170766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 00:49:26.759537 1170766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 00:49:26.759540 1170766 command_runner.go:130] > #
	I1217 00:49:26.759546 1170766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 00:49:26.759556 1170766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 00:49:26.759563 1170766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 00:49:26.759569 1170766 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 00:49:26.759578 1170766 command_runner.go:130] > #
	I1217 00:49:26.759582 1170766 command_runner.go:130] > # default_mounts_file = ""
	I1217 00:49:26.759588 1170766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 00:49:26.759595 1170766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 00:49:26.759599 1170766 command_runner.go:130] > # pids_limit = -1
	I1217 00:49:26.759609 1170766 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1217 00:49:26.759619 1170766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 00:49:26.759625 1170766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 00:49:26.759634 1170766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 00:49:26.759644 1170766 command_runner.go:130] > # log_size_max = -1
	I1217 00:49:26.759653 1170766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 00:49:26.759660 1170766 command_runner.go:130] > # log_to_journald = false
	I1217 00:49:26.759666 1170766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 00:49:26.759671 1170766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 00:49:26.759676 1170766 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 00:49:26.759681 1170766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 00:49:26.759686 1170766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 00:49:26.759694 1170766 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 00:49:26.759700 1170766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 00:49:26.759704 1170766 command_runner.go:130] > # read_only = false
	I1217 00:49:26.759714 1170766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 00:49:26.759721 1170766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 00:49:26.759725 1170766 command_runner.go:130] > # live configuration reload.
	I1217 00:49:26.759734 1170766 command_runner.go:130] > # log_level = "info"
	I1217 00:49:26.759741 1170766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 00:49:26.759762 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.759770 1170766 command_runner.go:130] > # log_filter = ""
	I1217 00:49:26.759776 1170766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759782 1170766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 00:49:26.759790 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759801 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.759809 1170766 command_runner.go:130] > # uid_mappings = ""
	I1217 00:49:26.759815 1170766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759821 1170766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 00:49:26.759825 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759833 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761229 1170766 command_runner.go:130] > # gid_mappings = ""
	I1217 00:49:26.761253 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 00:49:26.761260 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761266 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761274 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761925 1170766 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 00:49:26.761952 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 00:49:26.761960 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761966 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761974 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.762609 1170766 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 00:49:26.762630 1170766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 00:49:26.762637 1170766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 00:49:26.762643 1170766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 00:49:26.763842 1170766 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 00:49:26.763856 1170766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 00:49:26.763864 1170766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 00:49:26.763869 1170766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 00:49:26.763873 1170766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 00:49:26.763878 1170766 command_runner.go:130] > # drop_infra_ctr = true
	I1217 00:49:26.763885 1170766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 00:49:26.763900 1170766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 00:49:26.763909 1170766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 00:49:26.763919 1170766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 00:49:26.763926 1170766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 00:49:26.763932 1170766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 00:49:26.763938 1170766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 00:49:26.763943 1170766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 00:49:26.763947 1170766 command_runner.go:130] > # shared_cpuset = ""
	I1217 00:49:26.763953 1170766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 00:49:26.763958 1170766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 00:49:26.763963 1170766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 00:49:26.763976 1170766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 00:49:26.763980 1170766 command_runner.go:130] > # pinns_path = ""
	I1217 00:49:26.763986 1170766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 00:49:26.764001 1170766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 00:49:26.764011 1170766 command_runner.go:130] > # enable_criu_support = true
	I1217 00:49:26.764017 1170766 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 00:49:26.764022 1170766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 00:49:26.764027 1170766 command_runner.go:130] > # enable_pod_events = false
	I1217 00:49:26.764033 1170766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 00:49:26.764043 1170766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 00:49:26.764047 1170766 command_runner.go:130] > # default_runtime = "crun"
	I1217 00:49:26.764053 1170766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 00:49:26.764064 1170766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1217 00:49:26.764077 1170766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 00:49:26.764086 1170766 command_runner.go:130] > # creation as a file is not desired either.
	I1217 00:49:26.764094 1170766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 00:49:26.764101 1170766 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 00:49:26.764105 1170766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 00:49:26.764108 1170766 command_runner.go:130] > # ]
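	As a minimal sketch of the option just described, rejecting the /etc/hostname case mentioned in the comments (illustrative only, not part of the logged configuration):
	    absent_mount_sources_to_reject = [
	        "/etc/hostname",
	    ]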
	I1217 00:49:26.764115 1170766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 00:49:26.764124 1170766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 00:49:26.764131 1170766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 00:49:26.764141 1170766 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 00:49:26.764144 1170766 command_runner.go:130] > #
	I1217 00:49:26.764149 1170766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 00:49:26.764154 1170766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 00:49:26.764162 1170766 command_runner.go:130] > # runtime_type = "oci"
	I1217 00:49:26.764167 1170766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 00:49:26.764172 1170766 command_runner.go:130] > # inherit_default_runtime = false
	I1217 00:49:26.764194 1170766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 00:49:26.764203 1170766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 00:49:26.764208 1170766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 00:49:26.764212 1170766 command_runner.go:130] > # monitor_env = []
	I1217 00:49:26.764217 1170766 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 00:49:26.764225 1170766 command_runner.go:130] > # allowed_annotations = []
	I1217 00:49:26.764231 1170766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 00:49:26.764239 1170766 command_runner.go:130] > # no_sync_log = false
	I1217 00:49:26.764246 1170766 command_runner.go:130] > # default_annotations = {}
	I1217 00:49:26.764250 1170766 command_runner.go:130] > # stream_websockets = false
	I1217 00:49:26.764254 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.764304 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.764313 1170766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 00:49:26.764320 1170766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 00:49:26.764331 1170766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 00:49:26.764338 1170766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 00:49:26.764341 1170766 command_runner.go:130] > #   in $PATH.
	I1217 00:49:26.764347 1170766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 00:49:26.764352 1170766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 00:49:26.764359 1170766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 00:49:26.764366 1170766 command_runner.go:130] > #   state.
	I1217 00:49:26.764376 1170766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 00:49:26.764387 1170766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 00:49:26.764393 1170766 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 00:49:26.764400 1170766 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 00:49:26.764409 1170766 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 00:49:26.764454 1170766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 00:49:26.764462 1170766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 00:49:26.764468 1170766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 00:49:26.764475 1170766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 00:49:26.764480 1170766 command_runner.go:130] > #   The currently recognized values are:
	I1217 00:49:26.764486 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 00:49:26.764494 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 00:49:26.764504 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 00:49:26.764515 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 00:49:26.764524 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 00:49:26.764532 1170766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 00:49:26.764539 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 00:49:26.764554 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 00:49:26.764565 1170766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 00:49:26.764575 1170766 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 00:49:26.764586 1170766 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 00:49:26.764592 1170766 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 00:49:26.764599 1170766 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 00:49:26.764605 1170766 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 00:49:26.764611 1170766 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 00:49:26.764620 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 00:49:26.764629 1170766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 00:49:26.764634 1170766 command_runner.go:130] > #   deprecated option "conmon".
	I1217 00:49:26.764642 1170766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 00:49:26.764650 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 00:49:26.764658 1170766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 00:49:26.764668 1170766 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 00:49:26.764675 1170766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 00:49:26.764680 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 00:49:26.764688 1170766 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 00:49:26.764692 1170766 command_runner.go:130] > #   conmon-rs by using:
	I1217 00:49:26.764705 1170766 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 00:49:26.764713 1170766 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 00:49:26.764724 1170766 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 00:49:26.764731 1170766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 00:49:26.764740 1170766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 00:49:26.764747 1170766 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 00:49:26.764755 1170766 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 00:49:26.764760 1170766 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 00:49:26.764769 1170766 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 00:49:26.764778 1170766 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 00:49:26.764783 1170766 command_runner.go:130] > #   when a machine crash happens.
	I1217 00:49:26.764794 1170766 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 00:49:26.764803 1170766 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 00:49:26.764814 1170766 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 00:49:26.764819 1170766 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 00:49:26.764831 1170766 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 00:49:26.764843 1170766 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 00:49:26.764845 1170766 command_runner.go:130] > #
	I1217 00:49:26.764850 1170766 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 00:49:26.764853 1170766 command_runner.go:130] > #
	I1217 00:49:26.764859 1170766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 00:49:26.764870 1170766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 00:49:26.764873 1170766 command_runner.go:130] > #
	I1217 00:49:26.764881 1170766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 00:49:26.764890 1170766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 00:49:26.764894 1170766 command_runner.go:130] > #
	I1217 00:49:26.764900 1170766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 00:49:26.764907 1170766 command_runner.go:130] > # feature.
	I1217 00:49:26.764910 1170766 command_runner.go:130] > #
	I1217 00:49:26.764916 1170766 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 00:49:26.764922 1170766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 00:49:26.764928 1170766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 00:49:26.764934 1170766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 00:49:26.764944 1170766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 00:49:26.764947 1170766 command_runner.go:130] > #
	I1217 00:49:26.764953 1170766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 00:49:26.764963 1170766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 00:49:26.764966 1170766 command_runner.go:130] > #
	I1217 00:49:26.764972 1170766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 00:49:26.764981 1170766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 00:49:26.764984 1170766 command_runner.go:130] > #
	I1217 00:49:26.764991 1170766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 00:49:26.764997 1170766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 00:49:26.765000 1170766 command_runner.go:130] > # limitation.
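	Putting the runtime-handler format and the seccomp notifier requirement together, a hypothetical crio.conf entry could look like the sketch below; the handler name "notify-runc" and its runtime_root are assumptions, while the binary paths mirror the crun/runc entries that follow in this dump:
	    [crio.runtime.runtimes.notify-runc]
	    runtime_path = "/usr/libexec/crio/runc"
	    runtime_type = "oci"
	    runtime_root = "/run/notify-runc"
	    monitor_path = "/usr/libexec/crio/conmon"
	    # allow pods to opt in via the annotation
	    # io.kubernetes.cri-o.seccompNotifierAction=stop
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",
	    ]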
	I1217 00:49:26.765005 1170766 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 00:49:26.765010 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 00:49:26.765015 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765019 1170766 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 00:49:26.765028 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765047 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765056 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765061 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765065 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765069 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765073 1170766 command_runner.go:130] > allowed_annotations = [
	I1217 00:49:26.765077 1170766 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 00:49:26.765080 1170766 command_runner.go:130] > ]
	I1217 00:49:26.765084 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765089 1170766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 00:49:26.765093 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 00:49:26.765096 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765101 1170766 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 00:49:26.765110 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765114 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765119 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765124 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765132 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765136 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765141 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765148 1170766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 00:49:26.765158 1170766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 00:49:26.765165 1170766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 00:49:26.765173 1170766 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 00:49:26.765184 1170766 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 00:49:26.765195 1170766 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 00:49:26.765205 1170766 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 00:49:26.765212 1170766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 00:49:26.765226 1170766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 00:49:26.765235 1170766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 00:49:26.765244 1170766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 00:49:26.765251 1170766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 00:49:26.765254 1170766 command_runner.go:130] > # Example:
	I1217 00:49:26.765266 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 00:49:26.765271 1170766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 00:49:26.765283 1170766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 00:49:26.765288 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 00:49:26.765297 1170766 command_runner.go:130] > # cpuset = "0-1"
	I1217 00:49:26.765301 1170766 command_runner.go:130] > # cpushares = "5"
	I1217 00:49:26.765305 1170766 command_runner.go:130] > # cpuquota = "1000"
	I1217 00:49:26.765309 1170766 command_runner.go:130] > # cpuperiod = "100000"
	I1217 00:49:26.765312 1170766 command_runner.go:130] > # cpulimit = "35"
	I1217 00:49:26.765317 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.765321 1170766 command_runner.go:130] > # The workload name is workload-type.
	I1217 00:49:26.765337 1170766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 00:49:26.765342 1170766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 00:49:26.765348 1170766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 00:49:26.765357 1170766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 00:49:26.765362 1170766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 00:49:26.765372 1170766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 00:49:26.765378 1170766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 00:49:26.765388 1170766 command_runner.go:130] > # Default value is set to true
	I1217 00:49:26.765392 1170766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 00:49:26.765399 1170766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 00:49:26.765404 1170766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 00:49:26.765413 1170766 command_runner.go:130] > # Default value is set to 'false'
	I1217 00:49:26.765417 1170766 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 00:49:26.765422 1170766 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1217 00:49:26.765431 1170766 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 00:49:26.765434 1170766 command_runner.go:130] > # timezone = ""
	I1217 00:49:26.765440 1170766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 00:49:26.765444 1170766 command_runner.go:130] > #
	I1217 00:49:26.765450 1170766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 00:49:26.765460 1170766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 00:49:26.765464 1170766 command_runner.go:130] > [crio.image]
	I1217 00:49:26.765470 1170766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 00:49:26.765481 1170766 command_runner.go:130] > # default_transport = "docker://"
	I1217 00:49:26.765487 1170766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 00:49:26.765498 1170766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765502 1170766 command_runner.go:130] > # global_auth_file = ""
	I1217 00:49:26.765506 1170766 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 00:49:26.765512 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765517 1170766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.765523 1170766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 00:49:26.765536 1170766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765541 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765550 1170766 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 00:49:26.765556 1170766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 00:49:26.765562 1170766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1217 00:49:26.765574 1170766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1217 00:49:26.765580 1170766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 00:49:26.765583 1170766 command_runner.go:130] > # pause_command = "/pause"
	I1217 00:49:26.765589 1170766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 00:49:26.765595 1170766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 00:49:26.765606 1170766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 00:49:26.765612 1170766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 00:49:26.765624 1170766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 00:49:26.765630 1170766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 00:49:26.765638 1170766 command_runner.go:130] > # pinned_images = [
	I1217 00:49:26.765641 1170766 command_runner.go:130] > # ]
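	For illustration, the exact, glob and keyword pattern styles described above could be combined like this (the image names other than the pause image are hypothetical examples):
	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	        "registry.k8s.io/etcd*",         # glob: wildcard only at the end
	        "*coredns*",                     # keyword: wildcards on both ends
	    ]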
	I1217 00:49:26.765647 1170766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 00:49:26.765654 1170766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 00:49:26.765667 1170766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 00:49:26.765673 1170766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 00:49:26.765682 1170766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 00:49:26.765687 1170766 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 00:49:26.765692 1170766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 00:49:26.765703 1170766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 00:49:26.765709 1170766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 00:49:26.765722 1170766 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1217 00:49:26.765729 1170766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 00:49:26.765738 1170766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 00:49:26.765749 1170766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 00:49:26.765755 1170766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 00:49:26.765762 1170766 command_runner.go:130] > # changing them here.
	I1217 00:49:26.765771 1170766 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 00:49:26.765775 1170766 command_runner.go:130] > # insecure_registries = [
	I1217 00:49:26.765778 1170766 command_runner.go:130] > # ]
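	Since the comment above points to registries.conf instead of this deprecated option, an equivalent entry in /etc/containers/registries.conf (the registry host is a hypothetical example) would be:
	    [[registry]]
	    location = "registry.example.internal:5000"
	    insecure = true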
	I1217 00:49:26.765785 1170766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 00:49:26.765793 1170766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 00:49:26.765799 1170766 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 00:49:26.765805 1170766 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 00:49:26.765813 1170766 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 00:49:26.765819 1170766 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 00:49:26.765831 1170766 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 00:49:26.765835 1170766 command_runner.go:130] > # auto_reload_registries = false
	I1217 00:49:26.765842 1170766 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 00:49:26.765854 1170766 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1217 00:49:26.765860 1170766 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 00:49:26.765868 1170766 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 00:49:26.765872 1170766 command_runner.go:130] > # The mode of short name resolution.
	I1217 00:49:26.765879 1170766 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 00:49:26.765891 1170766 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1217 00:49:26.765899 1170766 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 00:49:26.765908 1170766 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 00:49:26.765914 1170766 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 00:49:26.765920 1170766 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 00:49:26.765924 1170766 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 00:49:26.765930 1170766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 00:49:26.765933 1170766 command_runner.go:130] > # CNI plugins.
	I1217 00:49:26.765942 1170766 command_runner.go:130] > [crio.network]
	I1217 00:49:26.765948 1170766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 00:49:26.765958 1170766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 00:49:26.765965 1170766 command_runner.go:130] > # cni_default_network = ""
	I1217 00:49:26.765972 1170766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 00:49:26.765976 1170766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 00:49:26.765982 1170766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 00:49:26.765989 1170766 command_runner.go:130] > # plugin_dirs = [
	I1217 00:49:26.765992 1170766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 00:49:26.765995 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765999 1170766 command_runner.go:130] > # List of included pod metrics.
	I1217 00:49:26.766003 1170766 command_runner.go:130] > # included_pod_metrics = [
	I1217 00:49:26.766006 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766012 1170766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 00:49:26.766015 1170766 command_runner.go:130] > [crio.metrics]
	I1217 00:49:26.766020 1170766 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 00:49:26.766031 1170766 command_runner.go:130] > # enable_metrics = false
	I1217 00:49:26.766037 1170766 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 00:49:26.766046 1170766 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 00:49:26.766053 1170766 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1217 00:49:26.766061 1170766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 00:49:26.766070 1170766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 00:49:26.766074 1170766 command_runner.go:130] > # metrics_collectors = [
	I1217 00:49:26.766078 1170766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 00:49:26.766083 1170766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 00:49:26.766087 1170766 command_runner.go:130] > # 	"containers_oom_total",
	I1217 00:49:26.766090 1170766 command_runner.go:130] > # 	"processes_defunct",
	I1217 00:49:26.766094 1170766 command_runner.go:130] > # 	"operations_total",
	I1217 00:49:26.766099 1170766 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 00:49:26.766103 1170766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 00:49:26.766107 1170766 command_runner.go:130] > # 	"operations_errors_total",
	I1217 00:49:26.766111 1170766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 00:49:26.766116 1170766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 00:49:26.766120 1170766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 00:49:26.766123 1170766 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 00:49:26.766131 1170766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 00:49:26.766140 1170766 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 00:49:26.766144 1170766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 00:49:26.766149 1170766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 00:49:26.766160 1170766 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 00:49:26.766163 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766169 1170766 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 00:49:26.766173 1170766 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 00:49:26.766178 1170766 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 00:49:26.766182 1170766 command_runner.go:130] > # metrics_port = 9090
	I1217 00:49:26.766187 1170766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 00:49:26.766195 1170766 command_runner.go:130] > # metrics_socket = ""
	I1217 00:49:26.766200 1170766 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 00:49:26.766206 1170766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 00:49:26.766216 1170766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 00:49:26.766221 1170766 command_runner.go:130] > # certificate on any modification event.
	I1217 00:49:26.766224 1170766 command_runner.go:130] > # metrics_cert = ""
	I1217 00:49:26.766230 1170766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 00:49:26.766239 1170766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 00:49:26.766243 1170766 command_runner.go:130] > # metrics_key = ""
	I1217 00:49:26.766249 1170766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 00:49:26.766252 1170766 command_runner.go:130] > [crio.tracing]
	I1217 00:49:26.766257 1170766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 00:49:26.766261 1170766 command_runner.go:130] > # enable_tracing = false
	I1217 00:49:26.766266 1170766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 00:49:26.766270 1170766 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 00:49:26.766277 1170766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 00:49:26.766287 1170766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 00:49:26.766292 1170766 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 00:49:26.766295 1170766 command_runner.go:130] > [crio.nri]
	I1217 00:49:26.766300 1170766 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 00:49:26.766308 1170766 command_runner.go:130] > # enable_nri = true
	I1217 00:49:26.766312 1170766 command_runner.go:130] > # NRI socket to listen on.
	I1217 00:49:26.766320 1170766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 00:49:26.766324 1170766 command_runner.go:130] > # NRI plugin directory to use.
	I1217 00:49:26.766328 1170766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 00:49:26.766333 1170766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 00:49:26.766338 1170766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 00:49:26.766343 1170766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 00:49:26.766396 1170766 command_runner.go:130] > # nri_disable_connections = false
	I1217 00:49:26.766406 1170766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 00:49:26.766411 1170766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 00:49:26.766416 1170766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 00:49:26.766420 1170766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 00:49:26.766425 1170766 command_runner.go:130] > # NRI default validator configuration.
	I1217 00:49:26.766431 1170766 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 00:49:26.766438 1170766 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 00:49:26.766447 1170766 command_runner.go:130] > # can be restricted/rejected:
	I1217 00:49:26.766451 1170766 command_runner.go:130] > # - OCI hook injection
	I1217 00:49:26.766456 1170766 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 00:49:26.766466 1170766 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 00:49:26.766471 1170766 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 00:49:26.766475 1170766 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 00:49:26.766486 1170766 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 00:49:26.766493 1170766 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 00:49:26.766498 1170766 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 00:49:26.766501 1170766 command_runner.go:130] > #
	I1217 00:49:26.766505 1170766 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 00:49:26.766509 1170766 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 00:49:26.766519 1170766 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 00:49:26.766525 1170766 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 00:49:26.766531 1170766 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 00:49:26.766540 1170766 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 00:49:26.766545 1170766 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 00:49:26.766550 1170766 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 00:49:26.766558 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766567 1170766 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 00:49:26.766574 1170766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 00:49:26.766579 1170766 command_runner.go:130] > [crio.stats]
	I1217 00:49:26.766584 1170766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 00:49:26.766590 1170766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 00:49:26.766597 1170766 command_runner.go:130] > # stats_collection_period = 0
	I1217 00:49:26.766603 1170766 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 00:49:26.766610 1170766 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 00:49:26.766618 1170766 command_runner.go:130] > # collection_period = 0
	I1217 00:49:26.769313 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.709999291Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 00:49:26.769335 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710041801Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 00:49:26.769350 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.7100717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 00:49:26.769358 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710096963Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 00:49:26.769367 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710182557Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.769376 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710452795Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 00:49:26.769388 1170766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1217 00:49:26.769780 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:26.769799 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:26.769817 1170766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:49:26.769847 1170766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:49:26.769980 1170766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:49:26.770057 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:49:26.777246 1170766 command_runner.go:130] > kubeadm
	I1217 00:49:26.777268 1170766 command_runner.go:130] > kubectl
	I1217 00:49:26.777274 1170766 command_runner.go:130] > kubelet
	I1217 00:49:26.778436 1170766 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:49:26.778500 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:49:26.786236 1170766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:49:26.799825 1170766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:49:26.813059 1170766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 00:49:26.828019 1170766 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:49:26.831670 1170766 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:49:26.831993 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.960014 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:27.502236 1170766 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:49:27.502256 1170766 certs.go:195] generating shared ca certs ...
	I1217 00:49:27.502272 1170766 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:27.502407 1170766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:49:27.502457 1170766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:49:27.502465 1170766 certs.go:257] generating profile certs ...
	I1217 00:49:27.502566 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:49:27.502627 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:49:27.502667 1170766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:49:27.502675 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:49:27.502694 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:49:27.502705 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:49:27.502716 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:49:27.502725 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:49:27.502736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:49:27.502746 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:49:27.502759 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:49:27.502805 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:49:27.502840 1170766 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:49:27.502848 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:49:27.502873 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:49:27.502896 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:49:27.502918 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:49:27.502963 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:27.502994 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.503007 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.503017 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.503565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:49:27.523390 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:49:27.542159 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:49:27.560122 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:49:27.578247 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:49:27.596258 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:49:27.613943 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:49:27.632292 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:49:27.650819 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:49:27.669066 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:49:27.687617 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:49:27.705744 1170766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:49:27.719458 1170766 ssh_runner.go:195] Run: openssl version
	I1217 00:49:27.725722 1170766 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:49:27.726120 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.733628 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:49:27.741335 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745236 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745284 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745341 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.786230 1170766 command_runner.go:130] > 51391683
	I1217 00:49:27.786728 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:49:27.794669 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.802040 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:49:27.809799 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813741 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813839 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813906 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.854690 1170766 command_runner.go:130] > 3ec20f2e
	I1217 00:49:27.854778 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:49:27.862235 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.869424 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:49:27.877608 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881295 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881338 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881389 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.921808 1170766 command_runner.go:130] > b5213941
	I1217 00:49:27.922298 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
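The sequence above copies each CA into /usr/share/ca-certificates, asks openssl for its subject-name hash, and links it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients on the node can find it. The sketch below reproduces those two steps locally; it is an illustration only (minikube drives the same commands over SSH via ssh_runner), and the file paths are taken from the log.

// hashlink.go - illustrative only: link a CA cert into /etc/ssl/certs under its
// OpenSSL subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject-name hash,
	// e.g. "51391683", which OpenSSL looks up as <hash>.0 in the certs dir.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}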
	I1217 00:49:27.929684 1170766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933543 1170766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933568 1170766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:49:27.933576 1170766 command_runner.go:130] > Device: 259,1	Inode: 3648879     Links: 1
	I1217 00:49:27.933583 1170766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:27.933589 1170766 command_runner.go:130] > Access: 2025-12-17 00:45:19.435586201 +0000
	I1217 00:49:27.933595 1170766 command_runner.go:130] > Modify: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933600 1170766 command_runner.go:130] > Change: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933605 1170766 command_runner.go:130] >  Birth: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933682 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:49:27.974244 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:27.974730 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:49:28.015269 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.015758 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:49:28.065826 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.066538 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:49:28.108358 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.108531 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:49:28.149181 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.149647 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:49:28.190353 1170766 command_runner.go:130] > Certificate will not expire
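Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A rough Go equivalent using crypto/x509 is sketched below; it is an assumption for illustration, not the check minikube actually runs on the node.

// certcheck.go - illustrative sketch: report whether a PEM certificate expires
// within the next 24h, the same question "openssl x509 -checkend 86400" answers.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire") // openssl exits non-zero in this case
	} else {
		fmt.Println("Certificate will not expire")
	}
}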
	I1217 00:49:28.190474 1170766 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:28.190584 1170766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:49:28.190665 1170766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:49:28.221145 1170766 cri.go:89] found id: ""
	I1217 00:49:28.221267 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:49:28.228507 1170766 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:49:28.228597 1170766 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:49:28.228619 1170766 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:49:28.229395 1170766 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:49:28.229438 1170766 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:49:28.229512 1170766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:49:28.236906 1170766 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:49:28.237356 1170766 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.237502 1170766 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389537" cluster setting kubeconfig missing "functional-389537" context setting]
	I1217 00:49:28.237796 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
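At this point the test's kubeconfig is missing the functional-389537 cluster and context, so minikube rewrites the file under a lock before continuing. A standalone client-go sketch of that kind of repair is shown below; the helper name is hypothetical and the paths are the ones from the log, not minikube's actual code path.

// kubeconfig_repair.go - illustrative sketch: add a missing cluster/context to a kubeconfig.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func ensureContext(path, name, server, caFile, certFile, keyFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server, CertificateAuthority: caFile}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{ClientCertificate: certFile, ClientKey: keyFile}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name, Namespace: "default"}
	cfg.CurrentContext = name
	// Persist the repaired config back to the same file.
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := ensureContext(
		"/home/jenkins/minikube-integration/22168-1134739/kubeconfig",
		"functional-389537",
		"https://192.168.49.2:8441",
		"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt",
		"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt",
		"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}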
	I1217 00:49:28.238221 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.238396 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.238920 1170766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:49:28.238939 1170766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:49:28.238945 1170766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:49:28.238950 1170766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:49:28.238954 1170766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:49:28.238995 1170766 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:49:28.239224 1170766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:49:28.246965 1170766 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:49:28.247039 1170766 kubeadm.go:602] duration metric: took 17.573937ms to restartPrimaryControlPlane
	I1217 00:49:28.247066 1170766 kubeadm.go:403] duration metric: took 56.597633ms to StartCluster
	I1217 00:49:28.247104 1170766 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.247179 1170766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.247837 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.248043 1170766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:49:28.248489 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:28.248569 1170766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:49:28.248676 1170766 addons.go:70] Setting storage-provisioner=true in profile "functional-389537"
	I1217 00:49:28.248696 1170766 addons.go:239] Setting addon storage-provisioner=true in "functional-389537"
	I1217 00:49:28.248719 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.249218 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.251024 1170766 addons.go:70] Setting default-storageclass=true in profile "functional-389537"
	I1217 00:49:28.251049 1170766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389537"
	I1217 00:49:28.251367 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.254651 1170766 out.go:179] * Verifying Kubernetes components...
	I1217 00:49:28.257533 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:28.287633 1170766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:49:28.290502 1170766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.290526 1170766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:49:28.290609 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.312501 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.312677 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.312998 1170766 addons.go:239] Setting addon default-storageclass=true in "functional-389537"
	I1217 00:49:28.313045 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.313499 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.334272 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.347658 1170766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:28.347681 1170766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:49:28.347742 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.374030 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.486040 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:28.502536 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.510858 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.252938 1170766 node_ready.go:35] waiting up to 6m0s for node "functional-389537" to be "Ready" ...
	I1217 00:49:29.253062 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.253118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.253338 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253370 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253391 1170766 retry.go:31] will retry after 245.662002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253435 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253452 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253459 1170766 retry.go:31] will retry after 276.192706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253512 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.500088 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:29.530677 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.579588 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.579743 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.579792 1170766 retry.go:31] will retry after 478.611243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607395 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.607453 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607473 1170766 retry.go:31] will retry after 213.763614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.753751 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.822424 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.886054 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.886099 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.886150 1170766 retry.go:31] will retry after 580.108639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.059411 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.142412 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.142520 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.142548 1170766 retry.go:31] will retry after 335.340669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.253845 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.254297 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.466582 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:30.478378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.546834 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.546919 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.546953 1170766 retry.go:31] will retry after 1.248601584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557846 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.557940 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557983 1170766 retry.go:31] will retry after 1.081200972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.753182 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.253427 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.253542 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.253954 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
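The repeated GET /api/v1/nodes/functional-389537 requests are the node_ready wait loop: every probe fails with connection refused while the apiserver on 192.168.49.2:8441 is down, and the loop keeps retrying inside its 6m0s budget. A minimal client-go poll of the same shape is sketched below, assuming the kubeconfig path from the log; it is not minikube's implementation.

// nodewait.go - illustrative poll for a node's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForReady(kubeconfig, nodeName string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// connection refused, not-found, or not yet Ready: wait and probe again
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", nodeName, timeout)
}

func main() {
	if err := waitForReady("/home/jenkins/minikube-integration/22168-1134739/kubeconfig", "functional-389537", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}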
	I1217 00:49:31.639465 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:31.698941 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.698993 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.699013 1170766 retry.go:31] will retry after 1.870151971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.754126 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.754197 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.754530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.795965 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:31.861932 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.861982 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.862003 1170766 retry.go:31] will retry after 1.008225242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.253184 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.253372 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.253717 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.753360 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.871155 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:32.928211 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:32.931741 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.931825 1170766 retry.go:31] will retry after 1.349013392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.253256 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.569378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:33.627393 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:33.631136 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.631170 1170766 retry.go:31] will retry after 1.556307432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.753384 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.753462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.753732 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.753786 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.253674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.281872 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:34.338860 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:34.338952 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.338994 1170766 retry.go:31] will retry after 2.730785051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.753261 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.753705 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.188371 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:35.253305 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.253379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.253659 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:35.253682 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253699 1170766 retry.go:31] will retry after 4.092845301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
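The addon applies fail for the same reason: kubectl cannot reach the apiserver on localhost:8441 to validate the manifests, so each failure is handed to a backoff retry with a growing, jittered delay, as the "will retry after ..." lines show. A generic sketch of such a retry helper follows; the delays and names are assumptions, not minikube's retry package.

// retry_sketch.go - illustrative backoff loop in the spirit of the retries above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or the attempts are exhausted,
// sleeping a jittered, doubling delay between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}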
	I1217 00:49:35.253755 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:36.253666 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.753252 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.753327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.070065 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:37.127098 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:37.130934 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.130970 1170766 retry.go:31] will retry after 4.776908541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.253166 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.753194 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.753659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.253587 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.253946 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.254001 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.753912 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.753994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.754371 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.254004 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.254408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.346816 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:39.407133 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:39.411576 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.411608 1170766 retry.go:31] will retry after 4.420378296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.753168 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.753277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.753541 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.253304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.753271 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.753349 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.753656 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.753707 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:41.253157 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.253546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.909084 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:41.968890 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:41.968925 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:41.968945 1170766 retry.go:31] will retry after 4.028082996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:42.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.253706 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.753164 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.753238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.753522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.253354 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.253724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:43.253792 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.753558 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.753644 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.753949 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.832189 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:43.890902 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:43.894375 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:43.894408 1170766 retry.go:31] will retry after 8.166287631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:44.253620 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.753652 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.753996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.253708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.254080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:45.254153 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.753590 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.753659 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.753909 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.997293 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:46.061414 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:46.061451 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.061470 1170766 retry.go:31] will retry after 11.083982648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.253886 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.253962 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.254309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.754095 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.754205 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.754534 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.253185 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.253531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.753195 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.753675 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:48.253335 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.253411 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.253779 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.753583 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.753654 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.253646 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.254063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.753928 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.754007 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.754325 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.754377 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.253612 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.253695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.253960 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.753804 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.753885 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.254063 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.254137 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.254480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.753480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.060996 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:52.120691 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:52.124209 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.124248 1170766 retry.go:31] will retry after 5.294346985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.253619 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.254054 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.753693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.753855 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.754194 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.254037 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.254206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.254462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.254510 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.753239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.753523 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.253651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.753370 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.753449 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.753783 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.253341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.753617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.753681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.146315 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:57.205486 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.209162 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.209194 1170766 retry.go:31] will retry after 16.847278069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.253385 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.253754 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.419134 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:57.479419 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.482994 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.483029 1170766 retry.go:31] will retry after 11.356263683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.753493 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.253330 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.253407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.753639 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.753716 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.754093 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.754160 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.753887 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.754215 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.253724 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.253810 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.254155 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.754120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.754206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.754562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:00.754621 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.253240 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.253370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.253698 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.253607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.253193 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.253613 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.753572 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.754045 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.253947 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.254268 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.253850 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.254364 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:05.754125 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.754208 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.754551 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.253164 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.253237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.253346 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.253428 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.253751 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.753540 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.753830 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:07.753881 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.253424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.253762 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.753666 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.753745 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.754125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.840442 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:08.894240 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:08.898223 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:08.898257 1170766 retry.go:31] will retry after 31.216976051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:09.253588 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.253672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.753741 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.754120 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:09.754170 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.253935 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.254009 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.253844 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.254271 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.754088 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.754175 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.754499 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:11.754558 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:12.253187 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.253522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.753227 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.753589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.753701 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.057576 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:14.115415 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:14.119129 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.119165 1170766 retry.go:31] will retry after 28.147339136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.253462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.253544 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.253877 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.253932 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:14.753601 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.753672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.753968 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.253641 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.253732 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.253997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.753777 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.253982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:16.254362 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:16.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.754016 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.253840 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.254281 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.754086 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.754162 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.754503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.253672 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.753651 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.753736 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.754062 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:18.754120 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:19.253943 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.254033 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.254372 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.753082 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.753159 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.753506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.753388 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.753479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.753884 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.253955 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.254007 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:21.753781 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.753865 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.254001 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.254355 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.753077 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.753153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.753404 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.253112 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.253188 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.253528 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.753620 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:23.753996 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.253660 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.253733 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.254004 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.753783 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.753862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.754204 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.253869 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.253944 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.254293 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:25.754034 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:26.253773 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.253845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.753983 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.754381 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.253979 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.753096 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.753176 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.753474 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.253306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:28.753591 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.753916 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.253231 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.753237 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.753688 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.753320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:30.753699 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.253635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.753306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.753379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.753638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.253669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.753350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.753691 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:32.753743 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.253478 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.253794 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.753653 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.754080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.253900 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.254314 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.754008 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:34.754052 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.253945 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.254265 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.753720 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.754034 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.253598 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.753708 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.753783 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.754104 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:36.754165 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:37.253918 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.253995 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.254311 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.753961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.253926 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.254006 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.254296 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.754122 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.754199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.754549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:38.754615 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:39.253269 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.253710 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.753180 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.753267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.753624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.116186 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:40.183350 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:40.183412 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.183435 1170766 retry.go:31] will retry after 25.382750455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.253664 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.254066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.753634 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.753706 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.753966 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.253718 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.253791 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.254134 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:41.254188 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:41.754033 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.754109 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.754488 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.253178 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.253257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.253626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.266982 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:42.344498 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:42.344537 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.344558 1170766 retry.go:31] will retry after 17.409313592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.753120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.753194 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.253776 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.253851 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.753822 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.753901 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.754256 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.253756 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.253922 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.254427 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.753326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:46.753299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.753383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.253287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.753623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.253662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.253705 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:48.753669 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.753752 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.754072 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.253894 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.253970 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.254291 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.753636 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.253843 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.253926 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.254289 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:50.254345 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:50.754111 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.754190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.754553 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.253242 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:52.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.754044 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.754422 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.253188 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.753632 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:54.753689 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.253391 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.253469 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.753512 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.753582 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.753864 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.253390 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:57.753200 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.753283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.753631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.253436 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.253523 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.253931 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.753948 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.754017 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.754272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.254035 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.254118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.254476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.254537 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:59.753199 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.754864 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:59.815839 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815879 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815961 1170766 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:00.253363 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.753302 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.753369 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.753727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:01.753787 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.253347 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.253689 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.753247 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.753324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.753665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.754077 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:03.754136 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.253779 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.254148 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.753646 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.753717 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.753978 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.253862 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.253937 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.254272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.566658 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:51:05.627909 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.627957 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.628043 1170766 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:05.631111 1170766 out.go:179] * Enabled addons: 
	I1217 00:51:05.634718 1170766 addons.go:530] duration metric: took 1m37.386158891s for enable addons: enabled=[]
	I1217 00:51:05.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.753674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.253279 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.253356 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.253651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:06.753202 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.753286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.753613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.253337 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.253416 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.753382 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.753456 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.753719 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.253394 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:08.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:08.753597 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.753675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.754006 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.253704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.753759 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.754219 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.254036 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.254117 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.254443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:10.254499 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:10.753146 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.753222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.753504 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.253431 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.253508 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.253817 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.753238 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:12.753661 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:13.253229 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.753608 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.753242 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.753314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.753606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.253289 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.253371 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.253681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:15.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.753291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.253602 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.253661 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:17.253717 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:17.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.753577 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.253297 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.253364 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.253668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.753853 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.754277 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.254102 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.254185 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.254526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:19.254586 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:19.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.753311 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.753580 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.253722 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.753652 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.253372 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.253701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.753406 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.753495 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:21.753874 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:22.253257 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.753561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.753603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.753685 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.754925 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1217 00:51:23.754986 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:24.253170 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.253267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.253617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.753328 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.753409 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.753746 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.253469 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.253546 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.253880 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.753657 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.753917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.253603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.253711 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.254049 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:26.254102 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:26.753618 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.753694 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.253707 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.753801 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.754135 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.253730 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.253819 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.254157 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:28.254213 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:28.754062 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.754150 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.754428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.253246 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.753316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.753701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.753680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:30.753758 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.253283 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.753617 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.753891 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.253582 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.753873 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.753956 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.754335 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:32.754410 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:33.253082 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.253153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.253408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.753211 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.253332 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.253414 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.253813 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.753517 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.753595 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.753879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.253210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.253725 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:35.753393 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.753476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.753815 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.253180 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.253769 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.253245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.253568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.753118 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.753199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.753448 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:37.753489 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.253352 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.253435 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.253790 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.753633 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.753713 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.754052 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.253630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.253702 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.254026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.754056 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:39.754113 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.253723 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.253798 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.254106 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.754024 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.253834 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.253927 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.254334 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.754152 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.754231 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.754552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:41.754611 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:42.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.753201 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.253361 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.253440 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.753589 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.753665 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.253738 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.253820 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.254118 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.254169 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:44.753956 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.754034 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.754376 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.253875 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.253954 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.254382 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.753128 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.753548 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.253245 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.253330 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.753570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:46.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.253226 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.253657 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.753364 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.753750 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.253392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.753652 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.753737 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.754073 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:48.754130 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.253766 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.253847 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.254210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.753704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.253788 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.253862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.254182 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.753997 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.754076 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.754412 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:50.754497 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:51.253162 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.253230 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.753249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.753596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.753309 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.753387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.753660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.253234 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.253323 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.253702 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:53.253761 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:53.753655 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.753749 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.754112 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.253936 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.753647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.253232 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.253310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.753558 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:55.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:56.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.253610 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.753338 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.253172 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.253533 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.753301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.753667 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:57.753734 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:58.253323 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.753599 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.753674 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.253780 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.253867 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.254242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.754384 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.754441 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:00.261843 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.262054 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.262449 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.753175 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.253252 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.253251 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.253328 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.253683 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:02.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.753677 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.253422 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.753719 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.753793 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:04.254346 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.753969 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.253791 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.253873 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.254220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.753910 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.753984 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.754315 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.253622 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.253718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.254014 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.753813 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.753893 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.754190 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.754244 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:07.254039 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.254467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.753167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.753245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.753517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.253390 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.753749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.753834 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.754171 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.253662 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.253741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.254087 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:09.254142 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:09.753914 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.753986 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.754327 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.254174 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.254257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.254595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.753297 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.753370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.253408 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.253499 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.253838 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.753526 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.753601 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.753894 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:11.753949 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:12.253560 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.253632 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.753805 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.754169 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.253986 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.254075 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.254435 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.754109 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.754186 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.754492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:13.754550 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:14.253219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.253640 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.753663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.253555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.753256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.753575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:16.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.253273 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:16.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:16.753300 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.753651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.253388 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.253700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.753205 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:18.253374 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.253447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:18.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:18.753640 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.754063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.253884 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.253974 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.753695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:20.253726 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.253806 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.254124 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:20.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:20.753974 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.754048 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.754388 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.253158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.253440 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.253238 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.253660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:23.253247 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:23.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.253272 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.253348 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.253619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:25.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.253630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:25.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:25.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.753591 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.253192 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.253269 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.753310 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.753396 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.253223 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.253476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.753141 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.753220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:27.753593 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:28.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.253455 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.253810 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:28.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.753915 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.754546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.253345 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.753321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:29.753656 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:30.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.253247 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.253502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:30.753270 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.753355 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.753724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.753227 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.753572 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:32.253250 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:32.253716 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:32.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.253201 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.253278 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.753973 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:34.253749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.253821 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.254108 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:34.254159 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:34.753679 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.753775 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.253885 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.253959 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.754073 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.754148 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.754487 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.753378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.753774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:36.753831 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:37.253510 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.253591 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.253957 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:37.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.253981 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.254058 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.753581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:39.253264 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:39.253650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:39.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.253402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.253743 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.753429 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.753503 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.753767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:41.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:41.253687 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:41.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.753639 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.253183 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.253286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.253225 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.253637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.753531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:43.753579 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:44.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.253576 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:44.753186 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.753264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.753599 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.253295 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.253735 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.753614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:45.753668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:46.253322 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.253398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:46.753161 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.753496 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.253291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:47.753697 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:48.253324 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:48.753549 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.753624 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.253784 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.753644 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.753723 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.754017 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:49.754065 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:50.253810 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.254239 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:50.753899 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.753975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.754306 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.253987 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.753830 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.753910 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.754242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:51.754311 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:52.254071 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.254149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.254484 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:52.753615 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.753942 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.253691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.254010 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.753953 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.754027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.754345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:53.754402 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.253938 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:54.753755 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.753827 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.754137 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.253941 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.254028 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.254370 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.754085 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.754158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:55.754529 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:56.253088 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.253170 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.253491 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.753629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.253537 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.753221 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:58.253261 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.253670 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:58.253729 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:58.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.754036 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.253988 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.254385 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.753169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:00.255875 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.256036 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.256356 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:00.256590 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:00.753314 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.753406 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.753729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.253451 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.253526 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.253836 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.753275 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.753592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.253224 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.753389 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:02.753739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:03.253378 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.253737 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:03.753753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.753845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.754210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.253955 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.254035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.753974 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:04.754016 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:05.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.254027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:05.753103 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.753190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.753552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.253106 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.253183 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.253481 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.753270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.753579 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:07.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.253288 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.253606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:07.253665 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:07.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.753237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.753615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.253519 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.253592 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.253905 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.754029 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.754407 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:09.253610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.253927 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:09.253968 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:09.753621 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.253736 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.253811 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.254126 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.753682 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.753989 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:11.253783 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:11.254252 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:11.754017 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.754095 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.754418 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.253174 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.253431 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.753169 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.753584 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.253725 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.753680 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.753954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:13.753997 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:14.253722 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.253802 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.254151 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:14.753815 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.753891 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.754223 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.254029 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.753806 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.753888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.754227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:15.754287 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:16.254074 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.254151 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.254498 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:16.753147 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.753225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.753479 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.253581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.753255 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:18.253273 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.253344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.253604 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:18.253646 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:18.753564 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.753634 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.253242 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.753337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:20.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:20.253718 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:20.753425 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.753514 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.753897 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.253583 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.753341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.753692 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.253625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.753263 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.753343 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:23.253236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.253636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:23.753622 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.754022 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.253690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.753690 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.753765 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:24.754119 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:25.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.253969 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.254295 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:25.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.253798 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.253879 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.254195 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.754019 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.754098 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.754443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:26.754501 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.253228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:27.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.753266 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.253426 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.253518 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.253857 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.753672 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.753767 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:29.254090 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.254181 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.254562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:29.254618 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:29.753292 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.753381 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.753726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.253402 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.253471 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.753408 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.753487 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.753850 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.753559 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:31.753600 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:32.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:32.755552 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.755633 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.755956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.253924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.753903 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.753982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.754307 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:33.754366 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:34.254124 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.254211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.254539 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:34.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.753398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:36.253412 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.253489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.253839 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:36.253891 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:36.753198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.753274 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.253727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.753428 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.753500 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.753749 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:38.253689 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.253766 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.254125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:38.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:38.753984 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.754059 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.754410 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.253122 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.253198 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.253459 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.753151 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.753259 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.753585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.253413 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.253767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.753533 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:40.753859 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:41.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.253596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:41.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.753268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.753605 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.253303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:43.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.253419 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:43.254022 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:43.753920 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.754014 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.754333 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.253118 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.253201 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.253526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:45.255002 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.255152 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.255478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:45.255533 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.753317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.253364 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.253796 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.753240 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.753574 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.753402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.753748 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:47.753808 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:48.253313 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:48.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.754069 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.253753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.253830 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.254168 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.753643 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.753731 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.754066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:49.754148 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:50.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.253994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:50.753099 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.753189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.253251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.253515 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:52.253411 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.253511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.253890 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:52.253964 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:52.753645 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.753719 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.253775 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.254202 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.754104 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.754180 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.754506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.253165 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.253239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.253494 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:54.753683 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:55.253358 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.253438 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.253774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:55.753173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.253177 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.253263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.253600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.753404 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:56.753805 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:57.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.253238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.253497 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.253572 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.253908 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.753944 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:58.753983 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:59.253741 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.253823 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.254166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:59.753959 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.754035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.253101 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.253195 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.753249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.753333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:01.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.253476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.253809 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:01.253884 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:01.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.753357 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.253331 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.253412 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.253739 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.753476 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.753557 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.753921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:03.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:03.253961 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:03.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.253243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.753367 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.753380 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.753466 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.753795 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:05.753852 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:06.253248 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:06.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.253321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.753244 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.753502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:08.253274 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.253352 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.253726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:08.253781 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:08.753770 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.753843 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.754162 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.253596 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.253675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.253945 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.753821 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.753904 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.754197 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:10.254043 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.254442 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:10.254495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:10.753142 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.753213 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.753467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.753382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.753753 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.253233 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.753305 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:12.753685 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:13.253376 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.253460 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.253784 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:13.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.753691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.253819 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.253898 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.254259 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.754072 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.754149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.754478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:14.754538 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:15.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.253248 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.253513 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:15.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.253764 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.753262 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:17.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.253350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.253713 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:17.253779 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:17.753480 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.753569 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.253664 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.753923 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.754002 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.754397 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.253225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.753254 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:19.753636 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:20.253334 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:20.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:21.753680 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:22.253524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.254279 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:22.753625 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.753972 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.253809 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.253888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.254196 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.754021 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.754101 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.754439 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:23.754495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:24.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.253250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.253623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:24.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.753607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.253403 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.253757 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.753263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.753530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:26.253258 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.253351 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.253693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:26.253746 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:26.753414 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.753490 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.753826 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.253253 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.753244 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.753673 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:28.253404 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.253479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.253776 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:28.253819 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:28.753595 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.753935 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.253381 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.253465 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.253954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.753737 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.753815 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:30.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:30.253995 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:30.753581 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.753666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.753956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.253745 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.253824 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.254143 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.753606 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.754026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:32.253830 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.254262 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:32.254319 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:32.754091 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.754169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.754555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.253132 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.253222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.753524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.753608 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.753895 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.253698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.753696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.753951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:34.753991 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:35.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.253882 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.254227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:35.754036 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.754112 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.754409 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.253093 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.253164 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.253416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.753271 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:37.253294 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.253378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.253664 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:37.253713 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:37.753375 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.253304 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.253376 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.753592 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.754003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:39.253608 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.253678 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.253933 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:39.253982 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:39.753743 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.753818 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.754166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.253997 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.254396 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.753116 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.253645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.753348 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.753424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.753761 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:41.753817 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:42.265137 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.265218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.265549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:42.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.753653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.253379 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.253788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.753627 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.753708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:44.253832 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.254217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:44.754035 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.754111 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.754446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.253168 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.753612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:46.253316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.253442 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:46.253773 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:46.753434 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.753511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.753766 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.253200 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.253277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.253570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.753267 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.753344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.753625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:48.253552 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.253626 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.253879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:48.253930 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:48.753836 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.753911 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.754217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.254026 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.254106 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.753598 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.753686 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:50.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.254209 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:50.254259 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:50.754039 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.754125 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.253140 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.253209 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.253462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.753185 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.253618 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.753250 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.753598 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:52.753651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:53.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:53.753664 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.753741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.754081 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.253591 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.253669 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.254015 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.753866 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.753946 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.754274 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:54.754329 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:55.254057 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.254131 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.254446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:55.753121 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.753211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.753456 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.253253 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.253557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:57.253260 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:57.253672 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:57.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.753392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.253532 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.253606 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.253910 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:59.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.253799 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.254130 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:59.254190 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:59.753954 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.754031 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.754326 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.260359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.261314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.266189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.753848 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:01.253975 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.254047 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:01.254396 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:01.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.753698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.753979 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.253768 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.253842 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.254146 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.753881 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.754220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.253637 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.253710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.253982 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.753913 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.753997 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.754309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:03.754367 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:04.254115 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.254189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.254536 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:04.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.753161 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.753416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.253139 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.253218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.253585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.753162 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.753568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:06.253362 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.253441 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:06.253744 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.753378 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.753454 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.753700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:08.253293 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.253374 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.253718 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:08.253778 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:08.753537 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.753616 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.253654 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.253730 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.254027 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.753808 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.753883 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.754221 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:10.254047 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.254124 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.254490 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:10.254545 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:10.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.753567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.253638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.753650 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.253202 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.253270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.253527 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:12.753698 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:13.253182 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.253256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.253592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:13.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.753477 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.753394 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.753489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.753829 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:14.753892 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:15.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:15.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.753586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.253207 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.753176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.753503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:17.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:17.253694 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:17.753223 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.253586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.753702 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.753779 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.754110 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:19.253939 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.254018 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.254367 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:19.254421 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:19.754123 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.754196 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.754517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.253190 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.753740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.253432 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.253502 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.253792 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.753215 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.753636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:21.753701 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:22.253195 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.253276 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:22.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.253220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.253589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:24.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.253665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:24.253703 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:24.753359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.753447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.753788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.253481 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.253571 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.253917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.753617 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.753635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:26.753691 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:27.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:27.753345 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.753799 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.253586 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.253666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.253996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.753669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:28.753726 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:29.253176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:29.253235 1170766 node_ready.go:38] duration metric: took 6m0.000252571s for node "functional-389537" to be "Ready" ...
	I1217 00:55:29.256355 1170766 out.go:203] 
	W1217 00:55:29.259198 1170766 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:55:29.259223 1170766 out.go:285] * 
	W1217 00:55:29.261375 1170766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:55:29.264098 1170766 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438930325Z" level=info msg="Using the internal default seccomp profile"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438939252Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438944848Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438951256Z" level=info msg="RDT not available in the host system"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.438966747Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439855023Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439883519Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.439901061Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.440753933Z" level=info msg="Conmon does support the --sync option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.440781723Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.44093034Z" level=info msg="Updated default CNI network name to "
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.441785451Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.44229278Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.442353874Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493497332Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493537101Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493622753Z" level=info msg="Create NRI interface"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493738041Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493751661Z" level=info msg="runtime interface created"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493764321Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493771796Z" level=info msg="runtime interface starting up..."
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493778499Z" level=info msg="starting plugins..."
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493791717Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:49:26 functional-389537 crio[5405]: time="2025-12-17T00:49:26.493858186Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:49:26 functional-389537 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:33.850031    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:33.850703    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:33.852352    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:33.853018    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:33.854796    8848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:37] overlayfs: idmapped layers are currently not supported
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:55:33 up  6:38,  0 user,  load average: 0.02, 0.18, 0.68
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 17 00:55:31 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:31 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:31 functional-389537 kubelet[8723]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:31 functional-389537 kubelet[8723]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:31 functional-389537 kubelet[8723]: E1217 00:55:31.826694    8723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:31 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:32 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 17 00:55:32 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:32 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:32 functional-389537 kubelet[8744]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:32 functional-389537 kubelet[8744]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:32 functional-389537 kubelet[8744]: E1217 00:55:32.564299    8744 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:32 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:32 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:33 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 17 00:55:33 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:33 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:33 functional-389537 kubelet[8764]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:33 functional-389537 kubelet[8764]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:33 functional-389537 kubelet[8764]: E1217 00:55:33.325597    8764 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:33 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:33 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (376.223833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 kubectl -- --context functional-389537 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 kubectl -- --context functional-389537 get pods: exit status 1 (112.167917ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-389537 kubectl -- --context functional-389537 get pods": exit status 1
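
The "connection refused" above is the expected symptom while the kubelet (and therefore the apiserver it runs) is down: nothing listens on 192.168.49.2:8441, so every kubectl passthrough fails identically. A quick hedged probe from the same host, using the endpoint reported in this log, would be:

    # Prints the fallback message while the apiserver is down; returns an HTTP body once it serves again.
    curl -sk https://192.168.49.2:8441/livez || echo "apiserver not reachable"

This assumes the Linux docker driver, where the 192.168.49.0/24 bridge network is reachable directly from the host.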
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
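
The inspect output is mostly useful for the published port map: each container port (22, 2376, 5000, 8441, 32443) is bound to 127.0.0.1 on a dynamically assigned host port, and the minikube log further below extracts the SSH port with exactly this kind of Go template. As an illustrative sketch, the host port fronting the apiserver port 8441 (33911 in this run) can be read back with:

    # Example only: query one entry of NetworkSettings.Ports via a Go template.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389537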
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (363.555394ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
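
Taken together with the earlier APIServer probe, the two status calls are consistent: the container (Host) is Running while the apiserver is Stopped because the kubelet never comes up. The same fields can be read in a single call; this is a sketch assuming minikube's --format template accepts multiple fields, as the single-field templates above do:

    # Host is expected to show Running and APIServer Stopped while the kubelet crash-loops.
    out/minikube-linux-arm64 status -p functional-389537 -n functional-389537 --format='host={{.Host}} apiserver={{.APIServer}} kubelet={{.Kubelet}}'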
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 logs -n 25: (1.038482716s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-099267 image ls --format short --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image   │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start   │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:latest                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add minikube-local-cache-test:functional-389537                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache delete minikube-local-cache-test:functional-389537                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl images                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cache   │ functional-389537 cache reload                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ kubectl │ functional-389537 kubectl -- --context functional-389537 get pods                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:49:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:49:23.461389 1170766 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:49:23.461547 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461559 1170766 out.go:374] Setting ErrFile to fd 2...
	I1217 00:49:23.461579 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461900 1170766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:49:23.462303 1170766 out.go:368] Setting JSON to false
	I1217 00:49:23.463185 1170766 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23514,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:49:23.463289 1170766 start.go:143] virtualization:  
	I1217 00:49:23.466912 1170766 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:49:23.469855 1170766 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:49:23.469995 1170766 notify.go:221] Checking for updates...
	I1217 00:49:23.475916 1170766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:49:23.478779 1170766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:23.481739 1170766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:49:23.484668 1170766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:49:23.487521 1170766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:49:23.490907 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:23.491070 1170766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:49:23.524450 1170766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:49:23.524610 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.580909 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.571176137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.581015 1170766 docker.go:319] overlay module found
	I1217 00:49:23.585845 1170766 out.go:179] * Using the docker driver based on existing profile
	I1217 00:49:23.588706 1170766 start.go:309] selected driver: docker
	I1217 00:49:23.588726 1170766 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.588842 1170766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:49:23.588945 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.644593 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.634960306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.645010 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:23.645070 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:23.645127 1170766 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.648351 1170766 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:49:23.651037 1170766 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:49:23.653878 1170766 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:49:23.656858 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:23.656904 1170766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:49:23.656917 1170766 cache.go:65] Caching tarball of preloaded images
	I1217 00:49:23.656980 1170766 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:49:23.657013 1170766 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:49:23.657024 1170766 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:49:23.657126 1170766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:49:23.675917 1170766 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:49:23.675939 1170766 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:49:23.675960 1170766 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:49:23.675991 1170766 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:49:23.676062 1170766 start.go:364] duration metric: took 47.228µs to acquireMachinesLock for "functional-389537"
	I1217 00:49:23.676087 1170766 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:49:23.676097 1170766 fix.go:54] fixHost starting: 
	I1217 00:49:23.676360 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:23.693660 1170766 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:49:23.693691 1170766 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:49:23.696944 1170766 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:49:23.696988 1170766 machine.go:94] provisionDockerMachine start ...
	I1217 00:49:23.697095 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.714561 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.714904 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.714921 1170766 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:49:23.856040 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:23.856064 1170766 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:49:23.856128 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.875306 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.875626 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.875637 1170766 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:49:24.024137 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:24.024222 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.043436 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.043770 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.043794 1170766 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:49:24.176920 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:49:24.176960 1170766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:49:24.176987 1170766 ubuntu.go:190] setting up certificates
	I1217 00:49:24.177005 1170766 provision.go:84] configureAuth start
	I1217 00:49:24.177076 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:24.194508 1170766 provision.go:143] copyHostCerts
	I1217 00:49:24.194553 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194603 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:49:24.194616 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194693 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:49:24.194827 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194850 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:49:24.194859 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194890 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:49:24.194946 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.194967 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:49:24.194975 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.195000 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:49:24.195062 1170766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:49:24.401567 1170766 provision.go:177] copyRemoteCerts
	I1217 00:49:24.401643 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:49:24.401688 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.419163 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:24.516584 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:49:24.516654 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:49:24.535526 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:49:24.535590 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:49:24.556116 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:49:24.556181 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:49:24.575533 1170766 provision.go:87] duration metric: took 398.504828ms to configureAuth
	I1217 00:49:24.575561 1170766 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:49:24.575753 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:24.575856 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.593152 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.593467 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.593486 1170766 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:49:24.914611 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:49:24.914655 1170766 machine.go:97] duration metric: took 1.217656857s to provisionDockerMachine
	I1217 00:49:24.914668 1170766 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:49:24.914681 1170766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:49:24.914755 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:49:24.914823 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.935845 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.036750 1170766 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:49:25.040402 1170766 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:49:25.040450 1170766 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:49:25.040457 1170766 command_runner.go:130] > VERSION_ID="12"
	I1217 00:49:25.040461 1170766 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:49:25.040466 1170766 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:49:25.040470 1170766 command_runner.go:130] > ID=debian
	I1217 00:49:25.040475 1170766 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:49:25.040479 1170766 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:49:25.040485 1170766 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:49:25.040531 1170766 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:49:25.040571 1170766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:49:25.040583 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:49:25.040642 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:49:25.040724 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:49:25.040736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 00:49:25.040812 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:49:25.040822 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> /etc/test/nested/copy/1136597/hosts
	I1217 00:49:25.040875 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:49:25.048565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:25.066116 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:49:25.083960 1170766 start.go:296] duration metric: took 169.276161ms for postStartSetup
	I1217 00:49:25.084042 1170766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:49:25.084089 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.101382 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.193085 1170766 command_runner.go:130] > 18%
	I1217 00:49:25.193644 1170766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:49:25.197890 1170766 command_runner.go:130] > 160G
	I1217 00:49:25.198395 1170766 fix.go:56] duration metric: took 1.522293417s for fixHost
	I1217 00:49:25.198422 1170766 start.go:83] releasing machines lock for "functional-389537", held for 1.522344181s
	I1217 00:49:25.198491 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:25.216362 1170766 ssh_runner.go:195] Run: cat /version.json
	I1217 00:49:25.216396 1170766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:49:25.216449 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.216473 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.237434 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.266075 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.438053 1170766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:49:25.438122 1170766 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:49:25.438253 1170766 ssh_runner.go:195] Run: systemctl --version
	I1217 00:49:25.444320 1170766 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:49:25.444367 1170766 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:49:25.444850 1170766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:49:25.480454 1170766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:49:25.484847 1170766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:49:25.484904 1170766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:49:25.484962 1170766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:49:25.493012 1170766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:49:25.493039 1170766 start.go:496] detecting cgroup driver to use...
	I1217 00:49:25.493090 1170766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:49:25.493156 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:49:25.508569 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:49:25.521635 1170766 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:49:25.521740 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:49:25.537766 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:49:25.551122 1170766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:49:25.669862 1170766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:49:25.789898 1170766 docker.go:234] disabling docker service ...
	I1217 00:49:25.789984 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:49:25.805401 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:49:25.818559 1170766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:49:25.946131 1170766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:49:26.093460 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:49:26.106879 1170766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:49:26.120278 1170766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1217 00:49:26.121659 1170766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:49:26.121720 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.130856 1170766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:49:26.130968 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.140092 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.149223 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.158222 1170766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:49:26.166662 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.176047 1170766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.184976 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.194179 1170766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:49:26.201960 1170766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:49:26.202030 1170766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:49:26.209746 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.327753 1170766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:49:26.499257 1170766 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:49:26.499380 1170766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:49:26.502956 1170766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 00:49:26.502992 1170766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:49:26.503000 1170766 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1217 00:49:26.503008 1170766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:26.503016 1170766 command_runner.go:130] > Access: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503022 1170766 command_runner.go:130] > Modify: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503035 1170766 command_runner.go:130] > Change: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503041 1170766 command_runner.go:130] >  Birth: -
	I1217 00:49:26.503359 1170766 start.go:564] Will wait 60s for crictl version
	I1217 00:49:26.503439 1170766 ssh_runner.go:195] Run: which crictl
	I1217 00:49:26.507311 1170766 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:49:26.507416 1170766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:49:26.531135 1170766 command_runner.go:130] > Version:  0.1.0
	I1217 00:49:26.531410 1170766 command_runner.go:130] > RuntimeName:  cri-o
	I1217 00:49:26.531606 1170766 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 00:49:26.531797 1170766 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:49:26.534036 1170766 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
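This version handshake is what the 60s crictl wait above is for; a minimal sketch of reproducing the same probe by hand against the socket configured in /etc/crictl.yaml:

    sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version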
	I1217 00:49:26.534147 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.559497 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.559533 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.559539 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.559545 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.559550 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.559554 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.559558 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.559563 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.559567 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.559570 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.559574 1170766 command_runner.go:130] >      static
	I1217 00:49:26.559578 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.559582 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.559598 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.559608 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.559612 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.559615 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.559620 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.559632 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.559637 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.561572 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.587741 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.587775 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.587782 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.587787 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.587793 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.587846 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.587858 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.587864 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.587877 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.587887 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.587891 1170766 command_runner.go:130] >      static
	I1217 00:49:26.587894 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.587897 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.587919 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.587929 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.587935 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.587950 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.587961 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.587966 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.587971 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.594651 1170766 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:49:26.597589 1170766 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:49:26.614215 1170766 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:49:26.618047 1170766 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:49:26.618237 1170766 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:49:26.618355 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:26.618425 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.651766 1170766 command_runner.go:130] > {
	I1217 00:49:26.651794 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.651799 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651810 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.651814 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651830 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.651837 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651841 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651850 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.651859 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.651866 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651870 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.651874 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651881 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651884 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651887 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651894 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.651901 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651911 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.651914 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651918 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651926 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.651935 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.651948 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651953 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.651957 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651963 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651970 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651973 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651980 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.651986 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651991 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.651994 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651998 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652006 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.652014 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.652026 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652030 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.652034 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.652038 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652041 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652044 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652051 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.652057 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652062 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.652065 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652069 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652077 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.652087 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.652091 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652095 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.652106 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652118 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652122 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652131 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652135 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652156 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652165 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652183 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.652204 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652210 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.652215 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652219 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652227 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.652238 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.652242 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652246 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.652252 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652256 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652260 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652266 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652271 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652274 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652277 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652284 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.652289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652296 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.652302 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652305 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652313 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.652322 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.652329 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652333 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.652337 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652344 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652350 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652354 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652358 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652361 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652364 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652371 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.652379 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652407 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.652458 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652463 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652470 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.652478 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.652526 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652536 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.652557 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652564 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652567 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652570 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652577 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.652589 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652595 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.652598 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652605 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652615 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.652653 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.652661 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652666 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.652670 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652674 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652677 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652681 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652689 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652696 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652702 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652708 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.652712 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652717 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.652722 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652726 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652734 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.652741 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.652747 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652751 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.652755 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652761 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.652765 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652775 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652779 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.652782 1170766 command_runner.go:130] >     }
	I1217 00:49:26.652785 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.652790 1170766 command_runner.go:130] > }
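Only the repoTags in that JSON matter for the preload decision; a minimal sketch of the same check done by hand (jq is an assumption here, the log does not show it installed on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'

When every expected v1.35.0-beta.0 tag is present, the run takes the "Images already preloaded, skipping extraction" path logged below.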
	I1217 00:49:26.655303 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.655332 1170766 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:49:26.655388 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.678896 1170766 command_runner.go:130] > {
	I1217 00:49:26.678916 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.678921 1170766 command_runner.go:130] >     {
	I1217 00:49:26.678929 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.678933 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.678939 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.678942 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678946 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.678958 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.678968 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.678972 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678976 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.678980 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.678990 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679002 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679020 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679027 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.679030 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679036 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.679039 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679043 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679056 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.679065 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.679071 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679075 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.679079 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679091 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679098 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679101 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679107 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.679111 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679119 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.679122 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679127 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679135 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.679146 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.679149 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679153 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.679160 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.679164 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679169 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679172 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679179 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.679185 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679190 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.679194 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679199 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679215 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.679225 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.679228 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679233 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.679239 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679243 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679249 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679257 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679264 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679268 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679271 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679277 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.679289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679294 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.679297 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679301 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679309 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.679317 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.679328 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679333 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.679336 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679340 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679344 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679351 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679355 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679365 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679368 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679375 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.679378 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679387 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.679390 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679394 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679405 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.679419 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.679423 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679427 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.679438 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679442 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679445 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679449 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679455 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679459 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679462 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679471 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.679476 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679481 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.679486 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679491 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679501 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.679517 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.679521 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679525 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.679529 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679535 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679543 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679549 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679555 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.679560 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679568 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.679574 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679577 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679586 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.679605 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.679612 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679616 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.679619 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679626 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679629 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679633 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679637 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679640 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679643 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679649 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.679655 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679660 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.679672 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679676 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679683 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.679691 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.679698 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679703 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.679706 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679710 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.679713 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679717 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679721 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.679727 1170766 command_runner.go:130] >     }
	I1217 00:49:26.679730 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.679735 1170766 command_runner.go:130] > }
	I1217 00:49:26.682128 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.682152 1170766 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:49:26.682160 1170766 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:49:26.682270 1170766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
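The kubelet unit shown above is applied as a systemd drop-in before kubeadm runs; a minimal sketch of writing it by hand, with the drop-in path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf being an assumption based on the usual kubeadm layout rather than something this log shows:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF
    sudo systemctl daemon-reload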
	I1217 00:49:26.682351 1170766 ssh_runner.go:195] Run: crio config
	I1217 00:49:26.731730 1170766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 00:49:26.731754 1170766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 00:49:26.731761 1170766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 00:49:26.731764 1170766 command_runner.go:130] > #
	I1217 00:49:26.731771 1170766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 00:49:26.731778 1170766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 00:49:26.731784 1170766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 00:49:26.731801 1170766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 00:49:26.731808 1170766 command_runner.go:130] > # reload'.
	I1217 00:49:26.731815 1170766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 00:49:26.731836 1170766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 00:49:26.731843 1170766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 00:49:26.731849 1170766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 00:49:26.731853 1170766 command_runner.go:130] > [crio]
	I1217 00:49:26.731859 1170766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 00:49:26.731866 1170766 command_runner.go:130] > # containers images, in this directory.
	I1217 00:49:26.732568 1170766 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 00:49:26.732592 1170766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 00:49:26.733157 1170766 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 00:49:26.733176 1170766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 00:49:26.733597 1170766 command_runner.go:130] > # imagestore = ""
	I1217 00:49:26.733614 1170766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 00:49:26.733623 1170766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 00:49:26.734179 1170766 command_runner.go:130] > # storage_driver = "overlay"
	I1217 00:49:26.734196 1170766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 00:49:26.734204 1170766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 00:49:26.734478 1170766 command_runner.go:130] > # storage_option = [
	I1217 00:49:26.734782 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.734798 1170766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 00:49:26.734807 1170766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 00:49:26.735378 1170766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 00:49:26.735394 1170766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 00:49:26.735411 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 00:49:26.735422 1170766 command_runner.go:130] > # always happen on a node reboot
	I1217 00:49:26.735984 1170766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 00:49:26.736023 1170766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 00:49:26.736036 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 00:49:26.736041 1170766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 00:49:26.736536 1170766 command_runner.go:130] > # version_file_persist = ""
	I1217 00:49:26.736561 1170766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 00:49:26.736570 1170766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 00:49:26.737150 1170766 command_runner.go:130] > # internal_wipe = true
	I1217 00:49:26.737173 1170766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 00:49:26.737180 1170766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 00:49:26.737739 1170766 command_runner.go:130] > # internal_repair = true
	I1217 00:49:26.737758 1170766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 00:49:26.737766 1170766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 00:49:26.737772 1170766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 00:49:26.738332 1170766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 00:49:26.738352 1170766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 00:49:26.738356 1170766 command_runner.go:130] > [crio.api]
	I1217 00:49:26.738361 1170766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 00:49:26.738921 1170766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 00:49:26.738940 1170766 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 00:49:26.739496 1170766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 00:49:26.739517 1170766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 00:49:26.739523 1170766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 00:49:26.740074 1170766 command_runner.go:130] > # stream_port = "0"
	I1217 00:49:26.740093 1170766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 00:49:26.740679 1170766 command_runner.go:130] > # stream_enable_tls = false
	I1217 00:49:26.740700 1170766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 00:49:26.741116 1170766 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 00:49:26.741133 1170766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 00:49:26.741147 1170766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 00:49:26.741613 1170766 command_runner.go:130] > # stream_tls_cert = ""
	I1217 00:49:26.741629 1170766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 00:49:26.741636 1170766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 00:49:26.742076 1170766 command_runner.go:130] > # stream_tls_key = ""
	I1217 00:49:26.742092 1170766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 00:49:26.742107 1170766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 00:49:26.742117 1170766 command_runner.go:130] > # automatically pick up the changes.
	I1217 00:49:26.742632 1170766 command_runner.go:130] > # stream_tls_ca = ""
	I1217 00:49:26.742675 1170766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743308 1170766 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 00:49:26.743331 1170766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743950 1170766 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 00:49:26.743971 1170766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 00:49:26.743978 1170766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 00:49:26.743981 1170766 command_runner.go:130] > [crio.runtime]
	I1217 00:49:26.743988 1170766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 00:49:26.743996 1170766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 00:49:26.744007 1170766 command_runner.go:130] > # "nofile=1024:2048"
	I1217 00:49:26.744021 1170766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 00:49:26.744329 1170766 command_runner.go:130] > # default_ulimits = [
	I1217 00:49:26.744680 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.744702 1170766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 00:49:26.745338 1170766 command_runner.go:130] > # no_pivot = false
	I1217 00:49:26.745359 1170766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 00:49:26.745367 1170766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 00:49:26.745979 1170766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 00:49:26.746000 1170766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 00:49:26.746006 1170766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 00:49:26.746013 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.746484 1170766 command_runner.go:130] > # conmon = ""
	I1217 00:49:26.746503 1170766 command_runner.go:130] > # Cgroup setting for conmon
	I1217 00:49:26.746512 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 00:49:26.746837 1170766 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 00:49:26.746859 1170766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 00:49:26.746866 1170766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 00:49:26.746875 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.747181 1170766 command_runner.go:130] > # conmon_env = [
	I1217 00:49:26.747508 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.747529 1170766 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 00:49:26.747536 1170766 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 00:49:26.747545 1170766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 00:49:26.747848 1170766 command_runner.go:130] > # default_env = [
	I1217 00:49:26.748185 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.748200 1170766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 00:49:26.748210 1170766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 00:49:26.750925 1170766 command_runner.go:130] > # selinux = false
	I1217 00:49:26.750948 1170766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 00:49:26.750958 1170766 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 00:49:26.750964 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.751661 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.751677 1170766 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 00:49:26.751683 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752150 1170766 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 00:49:26.752167 1170766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 00:49:26.752181 1170766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 00:49:26.752191 1170766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 00:49:26.752216 1170766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 00:49:26.752224 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752873 1170766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 00:49:26.752894 1170766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 00:49:26.752932 1170766 command_runner.go:130] > # the cgroup blockio controller.
	I1217 00:49:26.753417 1170766 command_runner.go:130] > # blockio_config_file = ""
	I1217 00:49:26.753438 1170766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 00:49:26.753444 1170766 command_runner.go:130] > # blockio parameters.
	I1217 00:49:26.754055 1170766 command_runner.go:130] > # blockio_reload = false
	I1217 00:49:26.754079 1170766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 00:49:26.754084 1170766 command_runner.go:130] > # irqbalance daemon.
	I1217 00:49:26.754673 1170766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 00:49:26.754692 1170766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 00:49:26.754700 1170766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 00:49:26.754708 1170766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 00:49:26.755498 1170766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 00:49:26.755515 1170766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 00:49:26.755521 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.756018 1170766 command_runner.go:130] > # rdt_config_file = ""
	I1217 00:49:26.756034 1170766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 00:49:26.756360 1170766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 00:49:26.756381 1170766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 00:49:26.756895 1170766 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 00:49:26.756917 1170766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 00:49:26.756925 1170766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 00:49:26.756935 1170766 command_runner.go:130] > # will be added.
	I1217 00:49:26.757272 1170766 command_runner.go:130] > # default_capabilities = [
	I1217 00:49:26.757675 1170766 command_runner.go:130] > # 	"CHOWN",
	I1217 00:49:26.758010 1170766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 00:49:26.758348 1170766 command_runner.go:130] > # 	"FSETID",
	I1217 00:49:26.758682 1170766 command_runner.go:130] > # 	"FOWNER",
	I1217 00:49:26.759200 1170766 command_runner.go:130] > # 	"SETGID",
	I1217 00:49:26.759214 1170766 command_runner.go:130] > # 	"SETUID",
	I1217 00:49:26.759238 1170766 command_runner.go:130] > # 	"SETPCAP",
	I1217 00:49:26.759246 1170766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 00:49:26.759249 1170766 command_runner.go:130] > # 	"KILL",
	I1217 00:49:26.759253 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759261 1170766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 00:49:26.759273 1170766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 00:49:26.759278 1170766 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 00:49:26.759290 1170766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 00:49:26.759297 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759305 1170766 command_runner.go:130] > default_sysctls = [
	I1217 00:49:26.759310 1170766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 00:49:26.759312 1170766 command_runner.go:130] > ]
	I1217 00:49:26.759317 1170766 command_runner.go:130] > # List of devices on the host that a
	I1217 00:49:26.759323 1170766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 00:49:26.759327 1170766 command_runner.go:130] > # allowed_devices = [
	I1217 00:49:26.759331 1170766 command_runner.go:130] > # 	"/dev/fuse",
	I1217 00:49:26.759338 1170766 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 00:49:26.759341 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759347 1170766 command_runner.go:130] > # List of additional devices. specified as
	I1217 00:49:26.759358 1170766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 00:49:26.759363 1170766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 00:49:26.759373 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759377 1170766 command_runner.go:130] > # additional_devices = [
	I1217 00:49:26.759380 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759386 1170766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 00:49:26.759396 1170766 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 00:49:26.759406 1170766 command_runner.go:130] > # 	"/etc/cdi",
	I1217 00:49:26.759411 1170766 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 00:49:26.759414 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759421 1170766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 00:49:26.759446 1170766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 00:49:26.759454 1170766 command_runner.go:130] > # Defaults to false.
	I1217 00:49:26.759459 1170766 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 00:49:26.759466 1170766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 00:49:26.759476 1170766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 00:49:26.759480 1170766 command_runner.go:130] > # hooks_dir = [
	I1217 00:49:26.759486 1170766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 00:49:26.759490 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759496 1170766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 00:49:26.759505 1170766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 00:49:26.759511 1170766 command_runner.go:130] > # its default mounts from the following two files:
	I1217 00:49:26.759515 1170766 command_runner.go:130] > #
	I1217 00:49:26.759522 1170766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 00:49:26.759532 1170766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 00:49:26.759537 1170766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 00:49:26.759540 1170766 command_runner.go:130] > #
	I1217 00:49:26.759546 1170766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 00:49:26.759556 1170766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 00:49:26.759563 1170766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 00:49:26.759569 1170766 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 00:49:26.759578 1170766 command_runner.go:130] > #
	I1217 00:49:26.759582 1170766 command_runner.go:130] > # default_mounts_file = ""
	I1217 00:49:26.759588 1170766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 00:49:26.759595 1170766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 00:49:26.759599 1170766 command_runner.go:130] > # pids_limit = -1
	I1217 00:49:26.759609 1170766 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1217 00:49:26.759619 1170766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 00:49:26.759625 1170766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 00:49:26.759634 1170766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 00:49:26.759644 1170766 command_runner.go:130] > # log_size_max = -1
	I1217 00:49:26.759653 1170766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 00:49:26.759660 1170766 command_runner.go:130] > # log_to_journald = false
	I1217 00:49:26.759666 1170766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 00:49:26.759671 1170766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 00:49:26.759676 1170766 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 00:49:26.759681 1170766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 00:49:26.759686 1170766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 00:49:26.759694 1170766 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 00:49:26.759700 1170766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 00:49:26.759704 1170766 command_runner.go:130] > # read_only = false
	I1217 00:49:26.759714 1170766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 00:49:26.759721 1170766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 00:49:26.759725 1170766 command_runner.go:130] > # live configuration reload.
	I1217 00:49:26.759734 1170766 command_runner.go:130] > # log_level = "info"
	I1217 00:49:26.759741 1170766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 00:49:26.759762 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.759770 1170766 command_runner.go:130] > # log_filter = ""
	I1217 00:49:26.759776 1170766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759782 1170766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 00:49:26.759790 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759801 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.759809 1170766 command_runner.go:130] > # uid_mappings = ""
	I1217 00:49:26.759815 1170766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759821 1170766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 00:49:26.759825 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759833 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761229 1170766 command_runner.go:130] > # gid_mappings = ""
	I1217 00:49:26.761253 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 00:49:26.761260 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761266 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761274 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761925 1170766 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 00:49:26.761952 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 00:49:26.761960 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761966 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761974 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.762609 1170766 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 00:49:26.762630 1170766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 00:49:26.762637 1170766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 00:49:26.762643 1170766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 00:49:26.763842 1170766 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 00:49:26.763856 1170766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 00:49:26.763864 1170766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 00:49:26.763869 1170766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 00:49:26.763873 1170766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 00:49:26.763878 1170766 command_runner.go:130] > # drop_infra_ctr = true
	I1217 00:49:26.763885 1170766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 00:49:26.763900 1170766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 00:49:26.763909 1170766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 00:49:26.763919 1170766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 00:49:26.763926 1170766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 00:49:26.763932 1170766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 00:49:26.763938 1170766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 00:49:26.763943 1170766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 00:49:26.763947 1170766 command_runner.go:130] > # shared_cpuset = ""
	I1217 00:49:26.763953 1170766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 00:49:26.763958 1170766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 00:49:26.763963 1170766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 00:49:26.763976 1170766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 00:49:26.763980 1170766 command_runner.go:130] > # pinns_path = ""
	I1217 00:49:26.763986 1170766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 00:49:26.764001 1170766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 00:49:26.764011 1170766 command_runner.go:130] > # enable_criu_support = true
	I1217 00:49:26.764017 1170766 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 00:49:26.764022 1170766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 00:49:26.764027 1170766 command_runner.go:130] > # enable_pod_events = false
	I1217 00:49:26.764033 1170766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 00:49:26.764043 1170766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 00:49:26.764047 1170766 command_runner.go:130] > # default_runtime = "crun"
	I1217 00:49:26.764053 1170766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 00:49:26.764064 1170766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1217 00:49:26.764077 1170766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 00:49:26.764086 1170766 command_runner.go:130] > # creation as a file is not desired either.
	I1217 00:49:26.764094 1170766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 00:49:26.764101 1170766 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 00:49:26.764105 1170766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 00:49:26.764108 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.764115 1170766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 00:49:26.764124 1170766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 00:49:26.764131 1170766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 00:49:26.764141 1170766 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 00:49:26.764144 1170766 command_runner.go:130] > #
	I1217 00:49:26.764149 1170766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 00:49:26.764154 1170766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 00:49:26.764162 1170766 command_runner.go:130] > # runtime_type = "oci"
	I1217 00:49:26.764167 1170766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 00:49:26.764172 1170766 command_runner.go:130] > # inherit_default_runtime = false
	I1217 00:49:26.764194 1170766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 00:49:26.764203 1170766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 00:49:26.764208 1170766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 00:49:26.764212 1170766 command_runner.go:130] > # monitor_env = []
	I1217 00:49:26.764217 1170766 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 00:49:26.764225 1170766 command_runner.go:130] > # allowed_annotations = []
	I1217 00:49:26.764231 1170766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 00:49:26.764239 1170766 command_runner.go:130] > # no_sync_log = false
	I1217 00:49:26.764246 1170766 command_runner.go:130] > # default_annotations = {}
	I1217 00:49:26.764250 1170766 command_runner.go:130] > # stream_websockets = false
	I1217 00:49:26.764254 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.764304 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.764313 1170766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 00:49:26.764320 1170766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 00:49:26.764331 1170766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 00:49:26.764338 1170766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 00:49:26.764341 1170766 command_runner.go:130] > #   in $PATH.
	I1217 00:49:26.764347 1170766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 00:49:26.764352 1170766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 00:49:26.764359 1170766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 00:49:26.764366 1170766 command_runner.go:130] > #   state.
	I1217 00:49:26.764376 1170766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 00:49:26.764387 1170766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 00:49:26.764393 1170766 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 00:49:26.764400 1170766 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 00:49:26.764409 1170766 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 00:49:26.764454 1170766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 00:49:26.764462 1170766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 00:49:26.764468 1170766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 00:49:26.764475 1170766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 00:49:26.764480 1170766 command_runner.go:130] > #   The currently recognized values are:
	I1217 00:49:26.764486 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 00:49:26.764494 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 00:49:26.764504 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 00:49:26.764515 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 00:49:26.764524 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 00:49:26.764532 1170766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 00:49:26.764539 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 00:49:26.764554 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 00:49:26.764565 1170766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 00:49:26.764575 1170766 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 00:49:26.764586 1170766 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 00:49:26.764592 1170766 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 00:49:26.764599 1170766 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 00:49:26.764605 1170766 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 00:49:26.764611 1170766 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 00:49:26.764620 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 00:49:26.764629 1170766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 00:49:26.764634 1170766 command_runner.go:130] > #   deprecated option "conmon".
	I1217 00:49:26.764642 1170766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 00:49:26.764650 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 00:49:26.764658 1170766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 00:49:26.764668 1170766 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 00:49:26.764675 1170766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 00:49:26.764680 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 00:49:26.764688 1170766 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 00:49:26.764692 1170766 command_runner.go:130] > #   conmon-rs by using:
	I1217 00:49:26.764705 1170766 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 00:49:26.764713 1170766 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 00:49:26.764724 1170766 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 00:49:26.764731 1170766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 00:49:26.764740 1170766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 00:49:26.764747 1170766 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 00:49:26.764755 1170766 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 00:49:26.764760 1170766 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 00:49:26.764769 1170766 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 00:49:26.764778 1170766 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 00:49:26.764783 1170766 command_runner.go:130] > #   when a machine crash happens.
	I1217 00:49:26.764794 1170766 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 00:49:26.764803 1170766 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 00:49:26.764814 1170766 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 00:49:26.764819 1170766 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 00:49:26.764831 1170766 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 00:49:26.764843 1170766 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 00:49:26.764845 1170766 command_runner.go:130] > #
	I1217 00:49:26.764850 1170766 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 00:49:26.764853 1170766 command_runner.go:130] > #
	I1217 00:49:26.764859 1170766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 00:49:26.764870 1170766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 00:49:26.764873 1170766 command_runner.go:130] > #
	I1217 00:49:26.764881 1170766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 00:49:26.764890 1170766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 00:49:26.764894 1170766 command_runner.go:130] > #
	I1217 00:49:26.764900 1170766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 00:49:26.764907 1170766 command_runner.go:130] > # feature.
	I1217 00:49:26.764910 1170766 command_runner.go:130] > #
	I1217 00:49:26.764916 1170766 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1217 00:49:26.764922 1170766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 00:49:26.764928 1170766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 00:49:26.764934 1170766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 00:49:26.764944 1170766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 00:49:26.764947 1170766 command_runner.go:130] > #
	I1217 00:49:26.764953 1170766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 00:49:26.764963 1170766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 00:49:26.764966 1170766 command_runner.go:130] > #
	I1217 00:49:26.764972 1170766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 00:49:26.764981 1170766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 00:49:26.764984 1170766 command_runner.go:130] > #
	I1217 00:49:26.764991 1170766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 00:49:26.764997 1170766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 00:49:26.765000 1170766 command_runner.go:130] > # limitation.
	I1217 00:49:26.765005 1170766 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 00:49:26.765010 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 00:49:26.765015 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765019 1170766 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 00:49:26.765028 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765047 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765056 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765061 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765065 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765069 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765073 1170766 command_runner.go:130] > allowed_annotations = [
	I1217 00:49:26.765077 1170766 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 00:49:26.765080 1170766 command_runner.go:130] > ]
	I1217 00:49:26.765084 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765089 1170766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 00:49:26.765093 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 00:49:26.765096 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765101 1170766 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 00:49:26.765110 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765114 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765119 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765124 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765132 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765136 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765141 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765148 1170766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 00:49:26.765158 1170766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 00:49:26.765165 1170766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 00:49:26.765173 1170766 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 00:49:26.765184 1170766 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 00:49:26.765195 1170766 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 00:49:26.765205 1170766 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 00:49:26.765212 1170766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 00:49:26.765226 1170766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 00:49:26.765235 1170766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 00:49:26.765244 1170766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 00:49:26.765251 1170766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 00:49:26.765254 1170766 command_runner.go:130] > # Example:
	I1217 00:49:26.765266 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 00:49:26.765271 1170766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 00:49:26.765283 1170766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 00:49:26.765288 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 00:49:26.765297 1170766 command_runner.go:130] > # cpuset = "0-1"
	I1217 00:49:26.765301 1170766 command_runner.go:130] > # cpushares = "5"
	I1217 00:49:26.765305 1170766 command_runner.go:130] > # cpuquota = "1000"
	I1217 00:49:26.765309 1170766 command_runner.go:130] > # cpuperiod = "100000"
	I1217 00:49:26.765312 1170766 command_runner.go:130] > # cpulimit = "35"
	I1217 00:49:26.765317 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.765321 1170766 command_runner.go:130] > # The workload name is workload-type.
	I1217 00:49:26.765337 1170766 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1217 00:49:26.765342 1170766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 00:49:26.765348 1170766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 00:49:26.765357 1170766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 00:49:26.765362 1170766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 00:49:26.765372 1170766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 00:49:26.765378 1170766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 00:49:26.765388 1170766 command_runner.go:130] > # Default value is set to true
	I1217 00:49:26.765392 1170766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 00:49:26.765399 1170766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 00:49:26.765404 1170766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 00:49:26.765413 1170766 command_runner.go:130] > # Default value is set to 'false'
	I1217 00:49:26.765417 1170766 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 00:49:26.765422 1170766 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1217 00:49:26.765431 1170766 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 00:49:26.765434 1170766 command_runner.go:130] > # timezone = ""
	I1217 00:49:26.765440 1170766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 00:49:26.765444 1170766 command_runner.go:130] > #
	I1217 00:49:26.765450 1170766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 00:49:26.765460 1170766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 00:49:26.765464 1170766 command_runner.go:130] > [crio.image]
	I1217 00:49:26.765470 1170766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 00:49:26.765481 1170766 command_runner.go:130] > # default_transport = "docker://"
	I1217 00:49:26.765487 1170766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 00:49:26.765498 1170766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765502 1170766 command_runner.go:130] > # global_auth_file = ""
	I1217 00:49:26.765506 1170766 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 00:49:26.765512 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765517 1170766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.765523 1170766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 00:49:26.765536 1170766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765541 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765550 1170766 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 00:49:26.765556 1170766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 00:49:26.765562 1170766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1217 00:49:26.765574 1170766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1217 00:49:26.765580 1170766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 00:49:26.765583 1170766 command_runner.go:130] > # pause_command = "/pause"
	I1217 00:49:26.765589 1170766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 00:49:26.765595 1170766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 00:49:26.765606 1170766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 00:49:26.765612 1170766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 00:49:26.765624 1170766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 00:49:26.765630 1170766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 00:49:26.765638 1170766 command_runner.go:130] > # pinned_images = [
	I1217 00:49:26.765641 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765647 1170766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 00:49:26.765654 1170766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 00:49:26.765667 1170766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 00:49:26.765673 1170766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 00:49:26.765682 1170766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 00:49:26.765687 1170766 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 00:49:26.765692 1170766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 00:49:26.765703 1170766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 00:49:26.765709 1170766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 00:49:26.765722 1170766 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1217 00:49:26.765729 1170766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 00:49:26.765738 1170766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 00:49:26.765749 1170766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 00:49:26.765755 1170766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 00:49:26.765762 1170766 command_runner.go:130] > # changing them here.
	I1217 00:49:26.765771 1170766 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 00:49:26.765775 1170766 command_runner.go:130] > # insecure_registries = [
	I1217 00:49:26.765778 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765785 1170766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 00:49:26.765793 1170766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 00:49:26.765799 1170766 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 00:49:26.765805 1170766 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 00:49:26.765813 1170766 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 00:49:26.765819 1170766 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 00:49:26.765831 1170766 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 00:49:26.765835 1170766 command_runner.go:130] > # auto_reload_registries = false
	I1217 00:49:26.765842 1170766 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 00:49:26.765854 1170766 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1217 00:49:26.765860 1170766 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 00:49:26.765868 1170766 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 00:49:26.765872 1170766 command_runner.go:130] > # The mode of short name resolution.
	I1217 00:49:26.765879 1170766 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 00:49:26.765891 1170766 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1217 00:49:26.765899 1170766 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 00:49:26.765908 1170766 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 00:49:26.765914 1170766 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 00:49:26.765920 1170766 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 00:49:26.765924 1170766 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 00:49:26.765930 1170766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 00:49:26.765933 1170766 command_runner.go:130] > # CNI plugins.
	I1217 00:49:26.765942 1170766 command_runner.go:130] > [crio.network]
	I1217 00:49:26.765948 1170766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 00:49:26.765958 1170766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 00:49:26.765965 1170766 command_runner.go:130] > # cni_default_network = ""
	I1217 00:49:26.765972 1170766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 00:49:26.765976 1170766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 00:49:26.765982 1170766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 00:49:26.765989 1170766 command_runner.go:130] > # plugin_dirs = [
	I1217 00:49:26.765992 1170766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 00:49:26.765995 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765999 1170766 command_runner.go:130] > # List of included pod metrics.
	I1217 00:49:26.766003 1170766 command_runner.go:130] > # included_pod_metrics = [
	I1217 00:49:26.766006 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766012 1170766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 00:49:26.766015 1170766 command_runner.go:130] > [crio.metrics]
	I1217 00:49:26.766020 1170766 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 00:49:26.766031 1170766 command_runner.go:130] > # enable_metrics = false
	I1217 00:49:26.766037 1170766 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 00:49:26.766046 1170766 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 00:49:26.766053 1170766 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1217 00:49:26.766061 1170766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 00:49:26.766070 1170766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 00:49:26.766074 1170766 command_runner.go:130] > # metrics_collectors = [
	I1217 00:49:26.766078 1170766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 00:49:26.766083 1170766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 00:49:26.766087 1170766 command_runner.go:130] > # 	"containers_oom_total",
	I1217 00:49:26.766090 1170766 command_runner.go:130] > # 	"processes_defunct",
	I1217 00:49:26.766094 1170766 command_runner.go:130] > # 	"operations_total",
	I1217 00:49:26.766099 1170766 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 00:49:26.766103 1170766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 00:49:26.766107 1170766 command_runner.go:130] > # 	"operations_errors_total",
	I1217 00:49:26.766111 1170766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 00:49:26.766116 1170766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 00:49:26.766120 1170766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 00:49:26.766123 1170766 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 00:49:26.766131 1170766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 00:49:26.766140 1170766 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 00:49:26.766144 1170766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 00:49:26.766149 1170766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 00:49:26.766160 1170766 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 00:49:26.766163 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766169 1170766 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 00:49:26.766173 1170766 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 00:49:26.766178 1170766 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 00:49:26.766182 1170766 command_runner.go:130] > # metrics_port = 9090
	I1217 00:49:26.766187 1170766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 00:49:26.766195 1170766 command_runner.go:130] > # metrics_socket = ""
	I1217 00:49:26.766200 1170766 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 00:49:26.766206 1170766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 00:49:26.766216 1170766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 00:49:26.766221 1170766 command_runner.go:130] > # certificate on any modification event.
	I1217 00:49:26.766224 1170766 command_runner.go:130] > # metrics_cert = ""
	I1217 00:49:26.766230 1170766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 00:49:26.766239 1170766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 00:49:26.766243 1170766 command_runner.go:130] > # metrics_key = ""
	I1217 00:49:26.766249 1170766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 00:49:26.766252 1170766 command_runner.go:130] > [crio.tracing]
	I1217 00:49:26.766257 1170766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 00:49:26.766261 1170766 command_runner.go:130] > # enable_tracing = false
	I1217 00:49:26.766266 1170766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 00:49:26.766270 1170766 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 00:49:26.766277 1170766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 00:49:26.766287 1170766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 00:49:26.766292 1170766 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 00:49:26.766295 1170766 command_runner.go:130] > [crio.nri]
	I1217 00:49:26.766300 1170766 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 00:49:26.766308 1170766 command_runner.go:130] > # enable_nri = true
	I1217 00:49:26.766312 1170766 command_runner.go:130] > # NRI socket to listen on.
	I1217 00:49:26.766320 1170766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 00:49:26.766324 1170766 command_runner.go:130] > # NRI plugin directory to use.
	I1217 00:49:26.766328 1170766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 00:49:26.766333 1170766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 00:49:26.766338 1170766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 00:49:26.766343 1170766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 00:49:26.766396 1170766 command_runner.go:130] > # nri_disable_connections = false
	I1217 00:49:26.766406 1170766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 00:49:26.766411 1170766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 00:49:26.766416 1170766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 00:49:26.766420 1170766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 00:49:26.766425 1170766 command_runner.go:130] > # NRI default validator configuration.
	I1217 00:49:26.766431 1170766 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 00:49:26.766438 1170766 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 00:49:26.766447 1170766 command_runner.go:130] > # can be restricted/rejected:
	I1217 00:49:26.766451 1170766 command_runner.go:130] > # - OCI hook injection
	I1217 00:49:26.766456 1170766 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 00:49:26.766466 1170766 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 00:49:26.766471 1170766 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 00:49:26.766475 1170766 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 00:49:26.766486 1170766 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 00:49:26.766493 1170766 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 00:49:26.766498 1170766 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 00:49:26.766501 1170766 command_runner.go:130] > #
	I1217 00:49:26.766505 1170766 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 00:49:26.766509 1170766 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 00:49:26.766519 1170766 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 00:49:26.766525 1170766 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 00:49:26.766531 1170766 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 00:49:26.766540 1170766 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 00:49:26.766545 1170766 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 00:49:26.766550 1170766 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 00:49:26.766558 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766567 1170766 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 00:49:26.766574 1170766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 00:49:26.766579 1170766 command_runner.go:130] > [crio.stats]
	I1217 00:49:26.766584 1170766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 00:49:26.766590 1170766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 00:49:26.766597 1170766 command_runner.go:130] > # stats_collection_period = 0
	I1217 00:49:26.766603 1170766 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 00:49:26.766610 1170766 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 00:49:26.766618 1170766 command_runner.go:130] > # collection_period = 0
	I1217 00:49:26.769313 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.709999291Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 00:49:26.769335 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710041801Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 00:49:26.769350 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.7100717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 00:49:26.769358 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710096963Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 00:49:26.769367 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710182557Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.769376 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710452795Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 00:49:26.769388 1170766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
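
The dump above is CRI-O's effective configuration; the trailing messages show it was assembled from the built-in defaults plus the drop-in fragments under /etc/crio/crio.conf.d. A minimal sketch of how to inspect those fragments on the node (paths taken from the log above; the fragment contents are machine-specific and not captured here, so these are illustrative commands rather than part of the test run):

	# List and read the drop-in fragments CRI-O merged over its defaults.
	sudo ls /etc/crio/crio.conf.d/
	sudo cat /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf.d/10-crio.conf
	# After editing a fragment, restart CRI-O and check the merge messages again.
	sudo systemctl restart crio
	sudo journalctl -u crio --no-pager -n 20
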
	I1217 00:49:26.769780 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:26.769799 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:26.769817 1170766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:49:26.769847 1170766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:49:26.769980 1170766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:49:26.770057 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:49:26.777246 1170766 command_runner.go:130] > kubeadm
	I1217 00:49:26.777268 1170766 command_runner.go:130] > kubectl
	I1217 00:49:26.777274 1170766 command_runner.go:130] > kubelet
	I1217 00:49:26.778436 1170766 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:49:26.778500 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:49:26.786236 1170766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:49:26.799825 1170766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:49:26.813059 1170766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
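
The line above copies the kubeadm/kubelet/kube-proxy configuration rendered earlier to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch of how such a file would be consumed by hand; the exact kubeadm invocation and flags minikube uses are not shown in this log, so treat this as a hypothetical manual equivalent rather than what the test ran:

	# Validate the rendered config without changing the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# Bootstrap (or re-bootstrap) the control plane from the same config.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
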
	I1217 00:49:26.828019 1170766 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:49:26.831670 1170766 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:49:26.831993 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.960014 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
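
At this point the kubelet unit and its 10-kubeadm.conf drop-in have been written and the service restarted. If this step needed debugging, the usual checks would be the following (assumed commands, not part of the captured run):

	systemctl cat kubelet                      # show the unit plus the 10-kubeadm.conf drop-in
	sudo systemctl status kubelet --no-pager   # confirm the service came up
	sudo journalctl -u kubelet --no-pager -n 50
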
	I1217 00:49:27.502236 1170766 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:49:27.502256 1170766 certs.go:195] generating shared ca certs ...
	I1217 00:49:27.502272 1170766 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:27.502407 1170766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:49:27.502457 1170766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:49:27.502465 1170766 certs.go:257] generating profile certs ...
	I1217 00:49:27.502566 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:49:27.502627 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:49:27.502667 1170766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:49:27.502675 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:49:27.502694 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:49:27.502705 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:49:27.502716 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:49:27.502725 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:49:27.502736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:49:27.502746 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:49:27.502759 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:49:27.502805 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:49:27.502840 1170766 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:49:27.502848 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:49:27.502873 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:49:27.502896 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:49:27.502918 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:49:27.502963 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:27.502994 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.503007 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.503017 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.503565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:49:27.523390 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:49:27.542159 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:49:27.560122 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:49:27.578247 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:49:27.596258 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:49:27.613943 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:49:27.632292 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:49:27.650819 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:49:27.669066 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:49:27.687617 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:49:27.705744 1170766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:49:27.719458 1170766 ssh_runner.go:195] Run: openssl version
	I1217 00:49:27.725722 1170766 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:49:27.726120 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.733628 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:49:27.741335 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745236 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745284 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745341 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.786230 1170766 command_runner.go:130] > 51391683
	I1217 00:49:27.786728 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:49:27.794669 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.802040 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:49:27.809799 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813741 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813839 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813906 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.854690 1170766 command_runner.go:130] > 3ec20f2e
	I1217 00:49:27.854778 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:49:27.862235 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.869424 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:49:27.877608 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881295 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881338 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881389 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.921808 1170766 command_runner.go:130] > b5213941
	I1217 00:49:27.922298 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
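Note on the three cert-trust passes above: for each bundle copied under /usr/share/ca-certificates, minikube asks openssl for the certificate's subject hash and then force-links it as /etc/ssl/certs/<hash>.0, which is how OpenSSL-based clients look up trusted CAs. A minimal, illustrative Go sketch of those two steps (it shells out to openssl and assumes it is on PATH; this is not minikube's actual implementation):

    // trustcert.go - illustrative only: mirrors the "openssl x509 -hash" and
    // "ln -fs" steps logged above for trusting an extra CA certificate.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func trustCert(certPath string) error {
    	// Ask openssl for the subject-name hash that the certs directory is keyed by.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

    	// OpenSSL clients resolve CAs as <hash>.0 inside /etc/ssl/certs.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // equivalent of the -f in "ln -fs"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }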
	I1217 00:49:27.929684 1170766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933543 1170766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933568 1170766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:49:27.933576 1170766 command_runner.go:130] > Device: 259,1	Inode: 3648879     Links: 1
	I1217 00:49:27.933583 1170766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:27.933589 1170766 command_runner.go:130] > Access: 2025-12-17 00:45:19.435586201 +0000
	I1217 00:49:27.933595 1170766 command_runner.go:130] > Modify: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933600 1170766 command_runner.go:130] > Change: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933605 1170766 command_runner.go:130] >  Birth: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933682 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:49:27.974244 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:27.974730 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:49:28.015269 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.015758 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:49:28.065826 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.066538 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:49:28.108358 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.108531 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:49:28.149181 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.149647 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:49:28.190353 1170766 command_runner.go:130] > Certificate will not expire
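The openssl x509 -checkend 86400 calls above succeed ("Certificate will not expire") only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. An equivalent check written directly against crypto/x509, shown purely as an illustration:

    // checkend.go - illustrative equivalent of `openssl x509 -checkend 86400`:
    // report whether a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True when the current time plus d falls past NotAfter.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }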
	I1217 00:49:28.190474 1170766 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:28.190584 1170766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:49:28.190665 1170766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:49:28.221145 1170766 cri.go:89] found id: ""
	I1217 00:49:28.221267 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:49:28.228507 1170766 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:49:28.228597 1170766 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:49:28.228619 1170766 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:49:28.229395 1170766 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:49:28.229438 1170766 kubeadm.go:598] restartPrimaryControlPlane start ...
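As the two lines above suggest, the earlier `sudo ls` of /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd is the presence test that steers minikube toward restarting the existing control plane instead of running a fresh kubeadm init. A minimal, illustrative sketch of such a check (paths taken from the log; the function is hypothetical, not minikube's code):

    // restartcheck.go - illustrative only: the kind of presence test that, per the
    // log above, decides between "cluster restart" and a fresh kubeadm init.
    package main

    import (
    	"fmt"
    	"os"
    )

    func hasExistingControlPlane() bool {
    	for _, p := range []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	} {
    		if _, err := os.Stat(p); err != nil {
    			return false // any missing piece means no reusable cluster state
    		}
    	}
    	return true
    }

    func main() {
    	if hasExistingControlPlane() {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	} else {
    		fmt.Println("no existing configuration, would run kubeadm init")
    	}
    }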
	I1217 00:49:28.229512 1170766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:49:28.236906 1170766 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:49:28.237356 1170766 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.237502 1170766 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389537" cluster setting kubeconfig missing "functional-389537" context setting]
	I1217 00:49:28.237796 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.238221 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.238396 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.238920 1170766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:49:28.238939 1170766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:49:28.238945 1170766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:49:28.238950 1170766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:49:28.238954 1170766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:49:28.238995 1170766 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:49:28.239224 1170766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:49:28.246965 1170766 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:49:28.247039 1170766 kubeadm.go:602] duration metric: took 17.573937ms to restartPrimaryControlPlane
	I1217 00:49:28.247066 1170766 kubeadm.go:403] duration metric: took 56.597633ms to StartCluster
	I1217 00:49:28.247104 1170766 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.247179 1170766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.247837 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.248043 1170766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:49:28.248489 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:28.248569 1170766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:49:28.248676 1170766 addons.go:70] Setting storage-provisioner=true in profile "functional-389537"
	I1217 00:49:28.248696 1170766 addons.go:239] Setting addon storage-provisioner=true in "functional-389537"
	I1217 00:49:28.248719 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.249218 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.251024 1170766 addons.go:70] Setting default-storageclass=true in profile "functional-389537"
	I1217 00:49:28.251049 1170766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389537"
	I1217 00:49:28.251367 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.254651 1170766 out.go:179] * Verifying Kubernetes components...
	I1217 00:49:28.257533 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:28.287633 1170766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:49:28.290502 1170766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.290526 1170766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:49:28.290609 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.312501 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.312677 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.312998 1170766 addons.go:239] Setting addon default-storageclass=true in "functional-389537"
	I1217 00:49:28.313045 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.313499 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.334272 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.347658 1170766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:28.347681 1170766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:49:28.347742 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.374030 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.486040 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:28.502536 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.510858 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.252938 1170766 node_ready.go:35] waiting up to 6m0s for node "functional-389537" to be "Ready" ...
	I1217 00:49:29.253062 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.253118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.253338 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253370 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253391 1170766 retry.go:31] will retry after 245.662002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253435 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253452 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253459 1170766 retry.go:31] will retry after 276.192706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253512 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.500088 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:29.530677 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.579588 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.579743 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.579792 1170766 retry.go:31] will retry after 478.611243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607395 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.607453 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607473 1170766 retry.go:31] will retry after 213.763614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.753751 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.822424 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.886054 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.886099 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.886150 1170766 retry.go:31] will retry after 580.108639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.059411 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.142412 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.142520 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.142548 1170766 retry.go:31] will retry after 335.340669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.253845 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.254297 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.466582 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:30.478378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.546834 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.546919 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.546953 1170766 retry.go:31] will retry after 1.248601584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557846 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.557940 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557983 1170766 retry.go:31] will retry after 1.081200972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.753182 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.253427 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.253542 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.253954 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
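	In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-389537 roughly every 500ms for up to 6 minutes, tolerating connection-refused responses until the API server comes back and the node reports Ready. A hedged client-go sketch of such a wait loop (kubeconfig path and error handling are illustrative):

    // nodeready.go - illustrative wait loop: poll the node until its Ready
    // condition is True or the timeout expires. Not minikube's actual code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		// Connection refused while the apiserver restarts is expected; keep polling.
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(cs, "functional-389537", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }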
	I1217 00:49:31.639465 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:31.698941 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.698993 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.699013 1170766 retry.go:31] will retry after 1.870151971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.754126 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.754197 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.754530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.795965 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:31.861932 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.861982 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.862003 1170766 retry.go:31] will retry after 1.008225242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.253184 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.253372 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.253717 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.753360 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.871155 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:32.928211 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:32.931741 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.931825 1170766 retry.go:31] will retry after 1.349013392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.253256 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.569378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:33.627393 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:33.631136 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.631170 1170766 retry.go:31] will retry after 1.556307432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.753384 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.753462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.753732 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.753786 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.253674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.281872 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:34.338860 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:34.338952 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.338994 1170766 retry.go:31] will retry after 2.730785051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.753261 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.753705 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.188371 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:35.253305 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.253379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.253659 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:35.253682 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253699 1170766 retry.go:31] will retry after 4.092845301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253755 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:36.253666 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.753252 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.753327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.070065 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:37.127098 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:37.130934 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.130970 1170766 retry.go:31] will retry after 4.776908541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.253166 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.753194 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.753659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.253587 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.253946 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.254001 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.753912 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.753994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.754371 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.254004 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.254408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.346816 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:39.407133 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:39.411576 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.411608 1170766 retry.go:31] will retry after 4.420378296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.753168 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.753277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.753541 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.253304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.753271 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.753349 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.753656 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.753707 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:41.253157 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.253546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.909084 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:41.968890 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:41.968925 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:41.968945 1170766 retry.go:31] will retry after 4.028082996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:42.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.253706 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.753164 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.753238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.753522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.253354 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.253724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:43.253792 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.753558 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.753644 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.753949 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.832189 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:43.890902 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:43.894375 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:43.894408 1170766 retry.go:31] will retry after 8.166287631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:44.253620 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.753652 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.753996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.253708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.254080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:45.254153 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.753590 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.753659 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.753909 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.997293 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:46.061414 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:46.061451 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.061470 1170766 retry.go:31] will retry after 11.083982648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.253886 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.253962 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.254309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.754095 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.754205 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.754534 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.253185 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.253531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.753195 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.753675 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:48.253335 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.253411 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.253779 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.753583 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.753654 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.253646 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.254063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.753928 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.754007 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.754325 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.754377 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:50.253612 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.253695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.253960 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.753804 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.753885 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.254063 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.254137 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.254480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.753480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.060996 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:52.120691 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:52.124209 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.124248 1170766 retry.go:31] will retry after 5.294346985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.253619 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.254054 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.753693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.753855 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.754194 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.254037 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.254206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.254462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.254510 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.753239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.753523 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.253651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.753370 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.753449 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.753783 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.253341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.753617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.753681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.146315 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:57.205486 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.209162 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.209194 1170766 retry.go:31] will retry after 16.847278069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.253385 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.253754 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.419134 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:57.479419 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.482994 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.483029 1170766 retry.go:31] will retry after 11.356263683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.753493 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.253330 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.253407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.753639 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.753716 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.754093 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.754160 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.753887 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.754215 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.253724 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.253810 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.254155 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.754120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.754206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.754562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:00.754621 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.253240 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.253370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.253698 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.253607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.253193 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.253613 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.753572 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.754045 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.253947 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.254268 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.253850 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.254364 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:05.754125 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.754208 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.754551 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.253164 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.253237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.253346 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.253428 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.253751 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.753540 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.753830 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:07.753881 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.253424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.253762 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.753666 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.753745 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.754125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.840442 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:08.894240 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:08.898223 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:08.898257 1170766 retry.go:31] will retry after 31.216976051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:09.253588 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.253672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.753741 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.754120 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:09.754170 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.253935 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.254009 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.253844 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.254271 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.754088 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.754175 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.754499 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:11.754558 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:12.253187 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.253522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.753227 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.753589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.753701 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.057576 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:14.115415 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:14.119129 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.119165 1170766 retry.go:31] will retry after 28.147339136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.253462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.253544 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.253877 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.253932 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:14.753601 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.753672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.753968 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.253641 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.253732 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.253997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.753777 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.253982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:16.254362 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:16.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.754016 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.253840 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.254281 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.754086 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.754162 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.754503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.253672 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.753651 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.753736 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.754062 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:18.754120 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:19.253943 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.254033 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.254372 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.753082 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.753159 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.753506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.753388 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.753479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.753884 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.253955 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.254007 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:21.753781 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.753865 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.254001 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.254355 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.753077 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.753153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.753404 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.253112 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.253188 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.253528 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.753620 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:23.753996 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.253660 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.253733 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.254004 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.753783 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.753862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.754204 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.253869 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.253944 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.254293 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:25.754034 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:26.253773 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.253845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.753983 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.754381 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.253979 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.753096 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.753176 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.753474 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.253306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:28.753591 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.753916 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.253231 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.753237 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.753688 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.753320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:30.753699 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.253635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.753306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.753379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.753638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.253669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.753350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.753691 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:32.753743 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.253478 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.253794 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.753653 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.754080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.253900 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.254314 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.754008 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:34.754052 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.253945 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.254265 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.753720 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.754034 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.253598 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.753708 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.753783 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.754104 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:36.754165 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:37.253918 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.253995 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.254311 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.753961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.253926 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.254006 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.254296 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.754122 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.754199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.754549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:38.754615 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:39.253269 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.253710 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.753180 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.753267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.753624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.116186 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:40.183350 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:40.183412 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.183435 1170766 retry.go:31] will retry after 25.382750455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
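The storage-provisioner apply above fails because kubectl cannot download the OpenAPI schema from the apiserver (localhost:8441 refuses the connection); the --validate=false flag mentioned in the stderr would only skip schema validation and would not make the apply succeed while the apiserver is down. A minimal sketch of re-running the same command by hand inside the node (command and paths copied from the log; only useful once port 8441 is answering):

    # hypothetical manual retry, run via: minikube ssh -p functional-389537
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml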
	I1217 00:50:40.253664 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.254066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.753634 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.753706 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.753966 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.253718 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.253791 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.254134 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:41.254188 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:41.754033 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.754109 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.754488 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.253178 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.253257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.253626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.266982 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:42.344498 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:42.344537 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.344558 1170766 retry.go:31] will retry after 17.409313592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.753120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.753194 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.253776 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.253851 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.753822 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.753901 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.754256 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.253756 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.253922 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.254427 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.753326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:46.753299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.753383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.253287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.753623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.253662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.253705 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:48.753669 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.753752 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.754072 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.253894 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.253970 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.254291 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.753636 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.253843 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.253926 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.254289 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:50.254345 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:50.754111 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.754190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.754553 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.253242 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:52.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.754044 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.754422 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.253188 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.753632 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:54.753689 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.253391 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.253469 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.753512 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.753582 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.753864 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.253390 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:57.753200 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.753283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.753631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.253436 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.253523 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.253931 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.753948 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.754017 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.754272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.254035 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.254118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.254476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.254537 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:59.753199 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.754864 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:59.815839 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815879 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815961 1170766 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
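At this point minikube stops retrying the default-storageclass addon and surfaces the warning to the user (out.go:285). A hedged way to confirm later whether the default StorageClass was ever created, reusing the same kubeconfig and kubectl binary that appear in the log:

    # hypothetical check once the apiserver is reachable, run inside the node
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get storageclass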
	I1217 00:51:00.253363 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.753302 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.753369 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.753727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:01.753787 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.253347 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.253689 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.753247 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.753324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.753665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.754077 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:03.754136 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.253779 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.254148 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.753646 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.753717 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.753978 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.253862 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.253937 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.254272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.566658 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:51:05.627909 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.627957 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.628043 1170766 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:05.631111 1170766 out.go:179] * Enabled addons: 
	I1217 00:51:05.634718 1170766 addons.go:530] duration metric: took 1m37.386158891s for enable addons: enabled=[]
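Both addon callbacks have now failed, so the addon phase ends after roughly 1m37s with an empty enabled list (enabled=[]). As a sketch, assuming the cluster later becomes healthy, the two addons could be re-enabled from the host with the standard minikube CLI (profile name taken from this log):

    minikube -p functional-389537 addons enable storage-provisioner
    minikube -p functional-389537 addons enable default-storageclass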
	I1217 00:51:05.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.753674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.253279 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.253356 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.253651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:06.753202 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.753286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.753613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.253337 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.253416 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.753382 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.753456 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.753719 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.253394 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:08.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:08.753597 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.753675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.754006 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.253704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.753759 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.754219 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.254036 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.254117 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.254443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:10.254499 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:10.753146 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.753222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.753504 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.253431 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.253508 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.253817 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.753238 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:12.753661 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:13.253229 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.753608 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.753242 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.753314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.753606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.253289 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.253371 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.253681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:15.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.753291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.253602 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.253661 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:17.253717 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:17.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.753577 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.253297 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.253364 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.253668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.753853 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.754277 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.254102 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.254185 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.254526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:19.254586 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:19.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.753311 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.753580 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.253722 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.753652 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.253372 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.253701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.753406 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.753495 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:21.753874 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:22.253257 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.753561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.753603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.753685 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.754925 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1217 00:51:23.754986 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:24.253170 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.253267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.253617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.753328 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.753409 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.753746 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.253469 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.253546 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.253880 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.753657 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.753917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.253603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.253711 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.254049 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:26.254102 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:26.753618 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.753694 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.253707 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.753801 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.754135 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.253730 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.253819 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.254157 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:28.254213 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:28.754062 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.754150 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.754428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.253246 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.753316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.753701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.753680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:30.753758 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.253283 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.753617 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.753891 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.253582 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.753873 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.753956 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.754335 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:32.754410 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:33.253082 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.253153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.253408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.753211 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.253332 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.253414 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.253813 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.753517 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.753595 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.753879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.253210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.253725 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:35.753393 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.753476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.753815 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.253180 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.253769 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.253245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.253568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.753118 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.753199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.753448 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:37.753489 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.253352 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.253435 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.253790 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.753633 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.753713 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.754052 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.253630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.253702 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.254026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.754056 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:39.754113 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.253723 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.253798 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.254106 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.754024 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.253834 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.253927 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.254334 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.754152 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.754231 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.754552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:41.754611 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:42.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.753201 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.253361 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.253440 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.753589 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.753665 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.253738 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.253820 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.254118 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.254169 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:44.753956 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.754034 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.754376 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.253875 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.253954 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.254382 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.753128 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.753548 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.253245 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.253330 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.753570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:46.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.253226 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.253657 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.753364 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.753750 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.253392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.753652 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.753737 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.754073 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:48.754130 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.253766 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.253847 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.254210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.753704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.253788 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.253862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.254182 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.753997 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.754076 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.754412 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:50.754497 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:51.253162 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.253230 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.753249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.753596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.753309 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.753387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.753660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.253234 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.253323 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.253702 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:53.253761 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:53.753655 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.753749 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.754112 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.253936 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.753647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.253232 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.253310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.753558 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:55.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:56.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.253610 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.753338 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.253172 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.253533 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.753301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.753667 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:57.753734 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:58.253323 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:58.753599 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:58.753674 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:58.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.253780 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.253867 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.254242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:59.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:59.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:59.754384 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:59.754441 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:00.261843 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.262054 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.262449 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:00.753175 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:00.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:00.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.253252 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:01.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:01.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:01.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:02.253251 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.253328 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.253683 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:02.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:02.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:02.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:02.753677 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.253422 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:03.753719 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:03.753793 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:03.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:04.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:04.254346 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:04.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:04.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:04.753969 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.253791 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.253873 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.254220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:05.753910 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:05.753984 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:05.754315 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.253622 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.253718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.254014 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:06.753813 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:06.753893 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:06.754190 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:06.754244 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:07.254039 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.254467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:07.753167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:07.753245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:07.753517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.253390 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:08.753749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:08.753834 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:08.754171 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:09.253662 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.253741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.254087 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:09.254142 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:09.753914 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:09.753986 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:09.754327 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.254174 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.254257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.254595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:10.753297 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:10.753370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:10.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.253408 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.253499 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.253838 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:11.753526 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:11.753601 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:11.753894 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:11.753949 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:12.253560 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.253632 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:12.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:12.753805 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:12.754169 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.253986 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.254075 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.254435 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:13.754109 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:13.754186 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:13.754492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:13.754550 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:14.253219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.253640 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:14.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:14.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:14.753663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.253555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:15.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:15.753256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:15.753575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:16.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.253273 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:16.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:16.753300 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:16.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:16.753651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.253388 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.253700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:17.753205 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:17.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:17.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:18.253374 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.253447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:18.253739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:18.753640 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:18.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:18.754063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.253884 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.253974 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:19.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:19.753695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:19.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:20.253726 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.253806 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.254124 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:20.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:20.753974 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:20.754048 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:20.754388 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.253158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.253440 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:21.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:21.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:21.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.253238 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.253660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:22.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:22.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:22.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:23.253247 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:23.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:23.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:23.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.253272 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.253348 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.253619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:24.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:24.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:24.753622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:25.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.253630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:25.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:25.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:25.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:25.753591 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.253192 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.253269 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:26.753310 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:26.753396 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:26.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.253223 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.253476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:27.753141 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:27.753220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:27.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:27.753593 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:28.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.253455 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.253810 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:28.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:28.753915 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:28.754546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.253345 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:29.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:29.753321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:29.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:29.753656 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:30.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.253247 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.253502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:30.753270 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:30.753355 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:30.753724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:31.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:31.753227 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:31.753572 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:32.253250 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:32.253716 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:32.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:32.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:32.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.253201 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.253278 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:33.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:33.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:33.753973 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:34.253749 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.253821 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.254108 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:34.254159 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:34.753679 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:34.753775 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:34.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.253885 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.253959 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.254288 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:35.754073 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:35.754148 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:35.754487 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:36.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:36.753378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:36.753774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:36.753831 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:37.253510 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.253591 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.253957 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:37.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:37.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:37.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.253981 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.254058 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:38.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:38.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:38.753581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:39.253264 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:39.253650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:39.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:39.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:39.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.253402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.253743 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:40.753429 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:40.753503 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:40.753767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:41.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:41.253687 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:41.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:41.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:41.753639 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.253183 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.253286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:42.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:42.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:42.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.253225 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.253637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:43.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:43.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:43.753531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:43.753579 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:44.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.253576 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:44.753186 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:44.753264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:44.753599 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.253295 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.253735 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:45.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:45.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:45.753614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:45.753668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:46.253322 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.253398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:46.753161 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:46.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:46.753496 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.253291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:47.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:47.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:47.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:47.753697 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:48.253324 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:48.753549 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:48.753624 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:48.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.253784 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:49.753644 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:49.753723 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:49.754017 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:49.754065 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:50.253810 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.254239 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:50.753899 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:50.753975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:50.754306 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.253987 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:51.753830 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:51.753910 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:51.754242 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:51.754311 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:52.254071 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.254149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.254484 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:52.753615 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:52.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:52.753942 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.253691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.254010 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:53.753953 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:53.754027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:53.754345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:53.754402 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.253938 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:54.753755 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:54.753827 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:54.754137 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.253941 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.254028 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.254370 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:55.754085 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:55.754158 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:55.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:55.754529 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:56.253088 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.253170 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.253491 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:56.753309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:56.753629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.253189 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.253537 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:57.753221 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:57.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:57.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:58.253261 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.253336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.253670 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:52:58.253729 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:58.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.754036 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.253988 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.254385 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.753169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:00.255875 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.256036 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.256356 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:00.256590 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:00.753314 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.753406 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.753729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.253451 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.253526 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.253836 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.753275 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.753592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.253224 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.753389 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:02.753739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:03.253378 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.253737 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:03.753753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.753845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.754210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.253955 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.254035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.753974 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:04.754016 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:05.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.254027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:05.753103 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.753190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.753552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.253106 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.253183 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.253481 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.753270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.753579 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:07.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.253288 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.253606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:07.253665 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:07.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.753237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.753615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.253519 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.253592 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.253905 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.754029 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.754407 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:09.253610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.253927 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:09.253968 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:09.753621 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.253736 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.253811 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.254126 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.753682 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.753989 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:11.253783 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:11.254252 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:11.754017 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.754095 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.754418 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.253174 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.253431 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.753169 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.753584 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.253725 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.753680 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.753954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:13.753997 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:14.253722 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.253802 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.254151 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:14.753815 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.753891 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.754223 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.254029 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.753806 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.753888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.754227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:15.754287 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:16.254074 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.254151 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.254498 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:16.753147 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.753225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.753479 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.253581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.753255 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:18.253273 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.253344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.253604 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:18.253646 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:18.753564 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.753634 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.253242 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.753337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:20.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:20.253718 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:20.753425 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.753514 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.753897 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.253583 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.753341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.753692 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.253625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.753263 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.753343 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:23.253236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.253636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:23.753622 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.754022 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.253690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.753690 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.753765 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:24.754119 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:25.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.253969 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.254295 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:25.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.253798 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.253879 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.254195 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.754019 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.754098 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.754443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:26.754501 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.253228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:27.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.753266 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.253426 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.253518 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.253857 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.753672 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.753767 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:29.254090 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.254181 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.254562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:29.254618 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:29.753292 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.753381 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.753726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.253402 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.253471 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.753408 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.753487 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.753850 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.753559 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:31.753600 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:32.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:32.755552 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.755633 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.755956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.253924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.753903 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.753982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.754307 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:33.754366 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:34.254124 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.254211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.254539 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:34.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.753398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:36.253412 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.253489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.253839 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:36.253891 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:36.753198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.753274 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.253727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.753428 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.753500 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.753749 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:38.253689 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.253766 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.254125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:38.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:38.753984 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.754059 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.754410 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.253122 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.253198 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.253459 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.753151 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.753259 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.753585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.253413 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.253767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.753533 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:40.753859 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:41.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.253596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:41.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.753268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.753605 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.253303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:43.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.253419 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:43.254022 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:43.753920 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.754014 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.754333 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.253118 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.253201 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.253526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:45.255002 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.255152 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.255478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:45.255533 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.753317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.253364 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.253796 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.753240 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.753574 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.753402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.753748 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:47.753808 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:48.253313 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:48.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.754069 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.253753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.253830 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.254168 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.753643 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.753731 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.754066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:49.754148 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:50.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.253994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:50.753099 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.753189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.253251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.253515 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:52.253411 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.253511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.253890 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:52.253964 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:52.753645 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.753719 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.253775 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.254202 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.754104 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.754180 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.754506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.253165 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.253239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.253494 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:54.753683 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:55.253358 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.253438 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.253774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:55.753173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.253177 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.253263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.253600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.753404 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:56.753805 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:57.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.253238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.253497 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.253572 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.253908 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.753944 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:58.753983 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:59.253741 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.253823 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.254166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:59.753959 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.754035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.253101 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.253195 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.753249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.753333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:01.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.253476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.253809 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:01.253884 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:01.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.753357 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.253331 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.253412 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.253739 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.753476 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.753557 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.753921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:03.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:03.253961 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:03.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.253243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.753367 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.753380 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.753466 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.753795 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:05.753852 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:06.253248 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:06.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.253321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.753244 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.753502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:08.253274 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.253352 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.253726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:08.253781 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:08.753770 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.753843 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.754162 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.253596 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.253675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.253945 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.753821 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.753904 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.754197 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:10.254043 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.254442 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:10.254495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:10.753142 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.753213 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.753467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.753382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.753753 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.253233 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.753305 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:12.753685 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:13.253376 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.253460 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.253784 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:13.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.753691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.253819 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.253898 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.254259 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.754072 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.754149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.754478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:14.754538 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:15.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.253248 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.253513 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:15.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.253764 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.753262 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:17.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.253350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.253713 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:17.253779 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:17.753480 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.753569 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.253664 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.753923 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.754002 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.754397 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.253225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.753254 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:19.753636 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:20.253334 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:20.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:21.753680 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:22.253524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.254279 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:22.753625 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.753972 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.253809 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.253888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.254196 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.754021 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.754101 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.754439 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:23.754495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:24.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.253250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.253623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:24.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.753607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.253403 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.253757 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.753263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.753530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:26.253258 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.253351 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.253693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:26.253746 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:26.753414 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.753490 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.753826 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.253253 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.753244 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.753673 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:28.253404 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.253479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.253776 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:28.253819 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:28.753595 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.753935 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.253381 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.253465 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.253954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.753737 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.753815 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:30.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:30.253995 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:30.753581 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.753666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.753956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.253745 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.253824 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.254143 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.753606 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.754026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:32.253830 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.254262 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:32.254319 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:32.754091 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.754169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.754555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.253132 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.253222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.753524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.753608 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.753895 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.253698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.753696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.753951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:34.753991 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:35.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.253882 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.254227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:35.754036 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.754112 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.754409 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.253093 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.253164 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.253416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.753271 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:37.253294 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.253378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.253664 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:37.253713 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:37.753375 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.253304 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.253376 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.753592 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.754003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:39.253608 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.253678 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.253933 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:39.253982 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:39.753743 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.753818 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.754166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.253997 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.254396 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.753116 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.253645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.753348 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.753424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.753761 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:41.753817 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:42.265137 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.265218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.265549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:42.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.753653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.253379 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.253788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.753627 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.753708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:44.253832 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.254217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:44.754035 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.754111 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.754446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.253168 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.753612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:46.253316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.253442 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:46.253773 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:46.753434 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.753511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.753766 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.253200 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.253277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.253570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.753267 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.753344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.753625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:48.253552 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.253626 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.253879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:48.253930 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:48.753836 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.753911 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.754217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.254026 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.254106 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.753598 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.753686 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:50.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.254209 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:50.254259 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:50.754039 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.754125 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.253140 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.253209 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.253462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.753185 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.253618 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.753250 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.753598 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:52.753651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:53.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:53.753664 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.753741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.754081 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.253591 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.253669 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.254015 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.753866 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.753946 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.754274 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:54.754329 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:55.254057 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.254131 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.254446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:55.753121 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.753211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.753456 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.253253 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.253557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:57.253260 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:57.253672 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:57.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.753392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.253532 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.253606 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.253910 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:59.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.253799 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.254130 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:59.254190 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:59.753954 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.754031 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.754326 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.260359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.261314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.266189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.753848 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:01.253975 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.254047 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:01.254396 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:01.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.753698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.753979 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.253768 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.253842 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.254146 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.753881 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.754220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.253637 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.253710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.253982 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.753913 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.753997 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.754309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:03.754367 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:04.254115 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.254189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.254536 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:04.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.753161 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.753416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.253139 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.253218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.253585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.753162 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.753568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:06.253362 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.253441 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:06.253744 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.753378 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.753454 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.753700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:08.253293 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.253374 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.253718 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:08.253778 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:08.753537 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.753616 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.253654 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.253730 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.254027 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.753808 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.753883 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.754221 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:10.254047 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.254124 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.254490 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:10.254545 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:10.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.753567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.253638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.753650 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.253202 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.253270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.253527 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:12.753698 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:13.253182 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.253256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.253592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:13.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.753477 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.753394 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.753489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.753829 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:14.753892 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:15.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:15.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.753586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.253207 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.753176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.753503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:17.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:17.253694 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:17.753223 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.253586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.753702 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.753779 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.754110 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:19.253939 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.254018 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.254367 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:19.254421 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:19.754123 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.754196 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.754517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.253190 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.753740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.253432 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.253502 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.253792 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.753215 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.753636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:21.753701 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:22.253195 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.253276 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:22.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.253220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.253589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:24.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.253665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:24.253703 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:24.753359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.753447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.753788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.253481 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.253571 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.253917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.753617 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.753635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:26.753691 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:27.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:27.753345 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.753799 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.253586 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.253666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.253996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.753669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:28.753726 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:29.253176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:29.253235 1170766 node_ready.go:38] duration metric: took 6m0.000252571s for node "functional-389537" to be "Ready" ...
	I1217 00:55:29.256355 1170766 out.go:203] 
	W1217 00:55:29.259198 1170766 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:55:29.259223 1170766 out.go:285] * 
	W1217 00:55:29.261375 1170766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:55:29.264098 1170766 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:55:38 functional-389537 crio[5405]: time="2025-12-17T00:55:38.048031022Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=4706890d-4ffd-4e66-b223-478f61947678 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.114719131Z" level=info msg="Checking image status: minikube-local-cache-test:functional-389537" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115307967Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115357443Z" level=info msg="Image minikube-local-cache-test:functional-389537 not found" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115430007Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-389537 found" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139497123Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-389537" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139635048Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-389537 not found" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139677952Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-389537 found" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166406933Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-389537" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166541256Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-389537 not found" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166583856Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-389537 found" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.15548198Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c29c4dc-94d8-4d41-85fa-eedb6c6e43bd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.478576539Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.478739227Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.47878291Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080270662Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080449283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080489529Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.105118388Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.10527324Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.105311869Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.128969033Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.129133665Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.129186956Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.66083782Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3707f441-343b-453c-ac33-cd3630163eb1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:43.232846    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.233865    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.235452    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.235858    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:43.237377    9499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:55:43 up  6:38,  0 user,  load average: 0.32, 0.24, 0.70
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:55:40 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:41 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1145.
	Dec 17 00:55:41 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:41 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:41 functional-389537 kubelet[9372]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:41 functional-389537 kubelet[9372]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:41 functional-389537 kubelet[9372]: E1217 00:55:41.568806    9372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:41 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:41 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:42 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1146.
	Dec 17 00:55:42 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:42 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:42 functional-389537 kubelet[9407]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:42 functional-389537 kubelet[9407]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:42 functional-389537 kubelet[9407]: E1217 00:55:42.327342    9407 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:42 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:42 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:42 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Dec 17 00:55:42 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:43 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:43 functional-389537 kubelet[9451]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:43 functional-389537 kubelet[9451]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:43 functional-389537 kubelet[9451]: E1217 00:55:43.073332    9451 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
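The journal excerpt above shows why the restart counter has climbed past 1145: kubelet v1.35.0-beta.0 exits during configuration validation because the node is still on a cgroup v1 hierarchy, so systemd keeps relaunching a process that can never come up (the minikube logs further down show the host is Ubuntu 20.04 on kernel 5.15, which boots with the legacy v1 hierarchy by default). A minimal stand-alone sketch of the same host check, assuming golang.org/x/sys/unix is available; it is not part of the test suite:

// cgroupcheck.go: a stand-alone sketch (not part of the minikube test suite)
// that checks the same host condition the kubelet is rejecting above, namely
// whether /sys/fs/cgroup is a unified cgroup v2 mount or a legacy v1 hierarchy.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	var fs unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
		fmt.Fprintln(os.Stderr, "statfs /sys/fs/cgroup:", err)
		os.Exit(1)
	}
	// A unified (v2) hierarchy is mounted as cgroup2fs; anything else, typically
	// tmpfs with per-controller cgroupfs mounts, means the host is still on v1.
	if fs.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified): the kubelet validation above would pass")
	} else {
		fmt.Println("cgroup v1: matches the validation failure in the journal above")
	}
}

On a cgroup v2 host (or once this job's kicbase environment moves to one), that validation step passes and the restart loop above should disappear.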
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (323.975892ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.41s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-389537 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-389537 get pods: exit status 1 (108.63513ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-389537 get pods": exit status 1
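Because the kubelet never gets past validation, nothing is serving behind 192.168.49.2:8441, so the direct kubectl invocation above fails with a plain TCP connection refusal rather than an API-level error. An illustrative probe, not part of the helpers, that separates "nothing listening" from "listening but unhealthy":

// dialcheck.go: illustrative only, not part of the test helpers. It separates
// "nothing is listening" (what kubectl hit above) from "listening but unhealthy"
// by attempting a bare TCP connection to the apiserver endpoint.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.49.2:8441 is the endpoint from the kubeconfig used above; from
	// outside the container network, substitute the 127.0.0.1:<port> mapping
	// reported by `docker port functional-389537 8441`.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // connection refused, as in the stderr above
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441; the failure would then be higher up the stack")
}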
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
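The cli_runner calls later in this post-mortem pull single fields out of this same inspect document with Go templates, e.g. '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' for the SSH port. A small stand-alone sketch, assuming docker is on PATH, that does the equivalent lookup for the published apiserver port (8441/tcp, mapped to 127.0.0.1:33911 in the dump above) by decoding the JSON instead:

// portlookup.go: a stand-alone sketch, not taken from the test helpers, that
// decodes `docker inspect` JSON and prints the published binding for 8441/tcp,
// the same information the post-mortem reads with Go templates.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-389537").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33911 in the dump above
	}
}

Decoding into a struct keeps the lookup readable when several ports or networks are needed at once; the template form remains the simpler choice for one-off shell invocations like the ones in these helpers.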
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (297.212581ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 logs -n 25: (1.184723203s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-099267 image ls --format short --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ ssh     │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image   │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start   │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:latest                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add minikube-local-cache-test:functional-389537                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache delete minikube-local-cache-test:functional-389537                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl images                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cache   │ functional-389537 cache reload                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ kubectl │ functional-389537 kubectl -- --context functional-389537 get pods                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:49:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:49:23.461389 1170766 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:49:23.461547 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461559 1170766 out.go:374] Setting ErrFile to fd 2...
	I1217 00:49:23.461579 1170766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:49:23.461900 1170766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:49:23.462303 1170766 out.go:368] Setting JSON to false
	I1217 00:49:23.463185 1170766 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23514,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:49:23.463289 1170766 start.go:143] virtualization:  
	I1217 00:49:23.466912 1170766 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:49:23.469855 1170766 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:49:23.469995 1170766 notify.go:221] Checking for updates...
	I1217 00:49:23.475916 1170766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:49:23.478779 1170766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:23.481739 1170766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:49:23.484668 1170766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:49:23.487521 1170766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:49:23.490907 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:23.491070 1170766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:49:23.524450 1170766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:49:23.524610 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.580909 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.571176137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.581015 1170766 docker.go:319] overlay module found
	I1217 00:49:23.585845 1170766 out.go:179] * Using the docker driver based on existing profile
	I1217 00:49:23.588706 1170766 start.go:309] selected driver: docker
	I1217 00:49:23.588726 1170766 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.588842 1170766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:49:23.588945 1170766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:49:23.644593 1170766 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 00:49:23.634960306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:49:23.645010 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:23.645070 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:23.645127 1170766 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:23.648351 1170766 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:49:23.651037 1170766 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:49:23.653878 1170766 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:49:23.656858 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:23.656904 1170766 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:49:23.656917 1170766 cache.go:65] Caching tarball of preloaded images
	I1217 00:49:23.656980 1170766 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:49:23.657013 1170766 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:49:23.657024 1170766 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:49:23.657126 1170766 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:49:23.675917 1170766 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:49:23.675939 1170766 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:49:23.675960 1170766 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:49:23.675991 1170766 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:49:23.676062 1170766 start.go:364] duration metric: took 47.228µs to acquireMachinesLock for "functional-389537"
	I1217 00:49:23.676087 1170766 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:49:23.676097 1170766 fix.go:54] fixHost starting: 
	I1217 00:49:23.676360 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:23.693660 1170766 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:49:23.693691 1170766 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:49:23.696944 1170766 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:49:23.696988 1170766 machine.go:94] provisionDockerMachine start ...
	I1217 00:49:23.697095 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.714561 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.714904 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.714921 1170766 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:49:23.856040 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:23.856064 1170766 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:49:23.856128 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:23.875306 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:23.875626 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:23.875637 1170766 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:49:24.024137 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:49:24.024222 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.043436 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.043770 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.043794 1170766 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:49:24.176920 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:49:24.176960 1170766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:49:24.176987 1170766 ubuntu.go:190] setting up certificates
	I1217 00:49:24.177005 1170766 provision.go:84] configureAuth start
	I1217 00:49:24.177076 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:24.194508 1170766 provision.go:143] copyHostCerts
	I1217 00:49:24.194553 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194603 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:49:24.194616 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:49:24.194693 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:49:24.194827 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194850 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:49:24.194859 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:49:24.194890 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:49:24.194946 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.194967 1170766 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:49:24.194975 1170766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:49:24.195000 1170766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:49:24.195062 1170766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:49:24.401567 1170766 provision.go:177] copyRemoteCerts
	I1217 00:49:24.401643 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:49:24.401688 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.419163 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:24.516584 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 00:49:24.516654 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:49:24.535526 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 00:49:24.535590 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:49:24.556116 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 00:49:24.556181 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:49:24.575533 1170766 provision.go:87] duration metric: took 398.504828ms to configureAuth
	I1217 00:49:24.575561 1170766 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:49:24.575753 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:24.575856 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.593152 1170766 main.go:143] libmachine: Using SSH client type: native
	I1217 00:49:24.593467 1170766 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:49:24.593486 1170766 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:49:24.914611 1170766 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:49:24.914655 1170766 machine.go:97] duration metric: took 1.217656857s to provisionDockerMachine
	I1217 00:49:24.914668 1170766 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:49:24.914681 1170766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:49:24.914755 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:49:24.914823 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:24.935845 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.036750 1170766 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:49:25.040402 1170766 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:49:25.040450 1170766 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:49:25.040457 1170766 command_runner.go:130] > VERSION_ID="12"
	I1217 00:49:25.040461 1170766 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:49:25.040466 1170766 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:49:25.040470 1170766 command_runner.go:130] > ID=debian
	I1217 00:49:25.040475 1170766 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:49:25.040479 1170766 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:49:25.040485 1170766 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:49:25.040531 1170766 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:49:25.040571 1170766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:49:25.040583 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:49:25.040642 1170766 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:49:25.040724 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:49:25.040736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 00:49:25.040812 1170766 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:49:25.040822 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> /etc/test/nested/copy/1136597/hosts
	I1217 00:49:25.040875 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:49:25.048565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:25.066116 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:49:25.083960 1170766 start.go:296] duration metric: took 169.276161ms for postStartSetup
	I1217 00:49:25.084042 1170766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:49:25.084089 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.101382 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.193085 1170766 command_runner.go:130] > 18%
	I1217 00:49:25.193644 1170766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:49:25.197890 1170766 command_runner.go:130] > 160G
	I1217 00:49:25.198395 1170766 fix.go:56] duration metric: took 1.522293417s for fixHost
	I1217 00:49:25.198422 1170766 start.go:83] releasing machines lock for "functional-389537", held for 1.522344181s
	I1217 00:49:25.198491 1170766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:49:25.216362 1170766 ssh_runner.go:195] Run: cat /version.json
	I1217 00:49:25.216396 1170766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:49:25.216449 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.216473 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:25.237434 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.266075 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:25.438053 1170766 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1217 00:49:25.438122 1170766 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:49:25.438253 1170766 ssh_runner.go:195] Run: systemctl --version
	I1217 00:49:25.444320 1170766 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:49:25.444367 1170766 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:49:25.444850 1170766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:49:25.480454 1170766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:49:25.484847 1170766 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:49:25.484904 1170766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:49:25.484962 1170766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:49:25.493012 1170766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:49:25.493039 1170766 start.go:496] detecting cgroup driver to use...
	I1217 00:49:25.493090 1170766 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:49:25.493156 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:49:25.508569 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:49:25.521635 1170766 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:49:25.521740 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:49:25.537766 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:49:25.551122 1170766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:49:25.669862 1170766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:49:25.789898 1170766 docker.go:234] disabling docker service ...
	I1217 00:49:25.789984 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:49:25.805401 1170766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:49:25.818559 1170766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:49:25.946131 1170766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:49:26.093460 1170766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:49:26.106879 1170766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:49:26.120278 1170766 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1217 00:49:26.121659 1170766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:49:26.121720 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.130856 1170766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:49:26.130968 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.140092 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.149223 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.158222 1170766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:49:26.166662 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.176047 1170766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.184976 1170766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.194179 1170766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:49:26.201960 1170766 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:49:26.202030 1170766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:49:26.209746 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.327753 1170766 ssh_runner.go:195] Run: sudo systemctl restart crio
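	(Editor's note: the sed edits logged above touch pause_image, cgroup_manager, conmon_cgroup and default_sysctls before the crio restart. A reconstructed sketch of what /etc/crio/crio.conf.d/02-crio.conf should roughly contain after those edits — inferred only from the commands above and the later `crio config` output, not captured from the node — is:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	)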
	I1217 00:49:26.499257 1170766 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:49:26.499380 1170766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:49:26.502956 1170766 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1217 00:49:26.502992 1170766 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:49:26.503000 1170766 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1217 00:49:26.503008 1170766 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:26.503016 1170766 command_runner.go:130] > Access: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503022 1170766 command_runner.go:130] > Modify: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503035 1170766 command_runner.go:130] > Change: 2025-12-17 00:49:26.438312542 +0000
	I1217 00:49:26.503041 1170766 command_runner.go:130] >  Birth: -
	I1217 00:49:26.503359 1170766 start.go:564] Will wait 60s for crictl version
	I1217 00:49:26.503439 1170766 ssh_runner.go:195] Run: which crictl
	I1217 00:49:26.507311 1170766 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:49:26.507416 1170766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:49:26.531135 1170766 command_runner.go:130] > Version:  0.1.0
	I1217 00:49:26.531410 1170766 command_runner.go:130] > RuntimeName:  cri-o
	I1217 00:49:26.531606 1170766 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1217 00:49:26.531797 1170766 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:49:26.534036 1170766 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:49:26.534147 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.559497 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.559533 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.559539 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.559545 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.559550 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.559554 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.559558 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.559563 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.559567 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.559570 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.559574 1170766 command_runner.go:130] >      static
	I1217 00:49:26.559578 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.559582 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.559598 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.559608 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.559612 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.559615 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.559620 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.559632 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.559637 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.561572 1170766 ssh_runner.go:195] Run: crio --version
	I1217 00:49:26.587741 1170766 command_runner.go:130] > crio version 1.34.3
	I1217 00:49:26.587775 1170766 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1217 00:49:26.587782 1170766 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1217 00:49:26.587787 1170766 command_runner.go:130] >    GitTreeState:   dirty
	I1217 00:49:26.587793 1170766 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1217 00:49:26.587846 1170766 command_runner.go:130] >    GoVersion:      go1.24.6
	I1217 00:49:26.587858 1170766 command_runner.go:130] >    Compiler:       gc
	I1217 00:49:26.587864 1170766 command_runner.go:130] >    Platform:       linux/arm64
	I1217 00:49:26.587877 1170766 command_runner.go:130] >    Linkmode:       static
	I1217 00:49:26.587887 1170766 command_runner.go:130] >    BuildTags:
	I1217 00:49:26.587891 1170766 command_runner.go:130] >      static
	I1217 00:49:26.587894 1170766 command_runner.go:130] >      netgo
	I1217 00:49:26.587897 1170766 command_runner.go:130] >      osusergo
	I1217 00:49:26.587919 1170766 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1217 00:49:26.587929 1170766 command_runner.go:130] >      seccomp
	I1217 00:49:26.587935 1170766 command_runner.go:130] >      apparmor
	I1217 00:49:26.587950 1170766 command_runner.go:130] >      selinux
	I1217 00:49:26.587961 1170766 command_runner.go:130] >    LDFlags:          unknown
	I1217 00:49:26.587966 1170766 command_runner.go:130] >    SeccompEnabled:   true
	I1217 00:49:26.587971 1170766 command_runner.go:130] >    AppArmorEnabled:  false
	I1217 00:49:26.594651 1170766 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:49:26.597589 1170766 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:49:26.614215 1170766 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:49:26.618047 1170766 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1217 00:49:26.618237 1170766 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:49:26.618355 1170766 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:49:26.618425 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.651766 1170766 command_runner.go:130] > {
	I1217 00:49:26.651794 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.651799 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651810 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.651814 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651830 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.651837 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651841 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651850 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.651859 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.651866 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651870 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.651874 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651881 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651884 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651887 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651894 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.651901 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651911 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.651914 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651918 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.651926 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.651935 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.651948 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651953 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.651957 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.651963 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.651970 1170766 command_runner.go:130] >     },
	I1217 00:49:26.651973 1170766 command_runner.go:130] >     {
	I1217 00:49:26.651980 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.651986 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.651991 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.651994 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.651998 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652006 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.652014 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.652026 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652030 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.652034 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.652038 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652041 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652044 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652051 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.652057 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652062 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.652065 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652069 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652077 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.652087 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.652091 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652095 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.652106 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652118 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652122 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652131 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652135 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652156 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652165 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652183 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.652204 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652210 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.652215 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652219 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652227 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.652238 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.652242 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652246 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.652252 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652256 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652260 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652266 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652271 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652274 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652277 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652284 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.652289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652296 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.652302 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652305 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652313 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.652322 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.652329 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652333 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.652337 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652344 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652350 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652354 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652358 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652361 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652364 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652371 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.652379 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652407 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.652458 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652463 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652470 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.652478 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.652526 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652536 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.652557 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652564 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652567 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652570 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652577 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.652589 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652595 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.652598 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652605 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652615 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.652653 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.652661 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652666 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.652670 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652674 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.652677 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652681 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652689 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.652696 1170766 command_runner.go:130] >     },
	I1217 00:49:26.652702 1170766 command_runner.go:130] >     {
	I1217 00:49:26.652708 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.652712 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.652717 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.652722 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652726 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.652734 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.652741 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.652747 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.652751 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.652755 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.652761 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.652765 1170766 command_runner.go:130] >       },
	I1217 00:49:26.652775 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.652779 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.652782 1170766 command_runner.go:130] >     }
	I1217 00:49:26.652785 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.652790 1170766 command_runner.go:130] > }
	I1217 00:49:26.655303 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.655332 1170766 crio.go:433] Images already preloaded, skipping extraction
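	(Editor's note: the preload check above only relies on the "images"/"id"/"repoTags" shape visible in the JSON. A minimal, hypothetical Go sketch — not minikube's actual crio.go code — that decodes `crictl images --output json` into tag lists would look like:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors only the fields of `crictl images --output json`
	// that appear in the log above: image id and repo tags.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Hypothetical local invocation; in the log this command runs over SSH inside the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}
	)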
	I1217 00:49:26.655388 1170766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:49:26.678896 1170766 command_runner.go:130] > {
	I1217 00:49:26.678916 1170766 command_runner.go:130] >   "images":  [
	I1217 00:49:26.678921 1170766 command_runner.go:130] >     {
	I1217 00:49:26.678929 1170766 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1217 00:49:26.678933 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.678939 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1217 00:49:26.678942 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678946 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.678958 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1217 00:49:26.678968 1170766 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1217 00:49:26.678972 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.678976 1170766 command_runner.go:130] >       "size":  "111333938",
	I1217 00:49:26.678980 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.678990 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679002 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679020 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679027 1170766 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1217 00:49:26.679030 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679036 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1217 00:49:26.679039 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679043 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679056 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1217 00:49:26.679065 1170766 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1217 00:49:26.679071 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679075 1170766 command_runner.go:130] >       "size":  "29037500",
	I1217 00:49:26.679079 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679091 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679098 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679101 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679107 1170766 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1217 00:49:26.679111 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679119 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1217 00:49:26.679122 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679127 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679135 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1217 00:49:26.679146 1170766 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1217 00:49:26.679149 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679153 1170766 command_runner.go:130] >       "size":  "74491780",
	I1217 00:49:26.679160 1170766 command_runner.go:130] >       "username":  "nonroot",
	I1217 00:49:26.679164 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679169 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679172 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679179 1170766 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1217 00:49:26.679185 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679190 1170766 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1217 00:49:26.679194 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679199 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679215 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1217 00:49:26.679225 1170766 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1217 00:49:26.679228 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679233 1170766 command_runner.go:130] >       "size":  "60857170",
	I1217 00:49:26.679239 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679243 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679249 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679257 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679264 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679268 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679271 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679277 1170766 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1217 00:49:26.679289 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679294 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1217 00:49:26.679297 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679301 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679309 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1217 00:49:26.679317 1170766 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1217 00:49:26.679328 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679333 1170766 command_runner.go:130] >       "size":  "84949999",
	I1217 00:49:26.679336 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679340 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679344 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679351 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679355 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679365 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679368 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679375 1170766 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1217 00:49:26.679378 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679387 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1217 00:49:26.679390 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679394 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679405 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1217 00:49:26.679419 1170766 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1217 00:49:26.679423 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679427 1170766 command_runner.go:130] >       "size":  "72170325",
	I1217 00:49:26.679438 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679442 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679445 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679449 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679455 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679459 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679462 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679471 1170766 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1217 00:49:26.679476 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679481 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1217 00:49:26.679486 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679491 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679501 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1217 00:49:26.679517 1170766 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1217 00:49:26.679521 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679525 1170766 command_runner.go:130] >       "size":  "74106775",
	I1217 00:49:26.679529 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679535 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679543 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679549 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679555 1170766 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1217 00:49:26.679560 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679568 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1217 00:49:26.679574 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679577 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679586 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1217 00:49:26.679605 1170766 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1217 00:49:26.679612 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679616 1170766 command_runner.go:130] >       "size":  "49822549",
	I1217 00:49:26.679619 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679626 1170766 command_runner.go:130] >         "value":  "0"
	I1217 00:49:26.679629 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679633 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679637 1170766 command_runner.go:130] >       "pinned":  false
	I1217 00:49:26.679640 1170766 command_runner.go:130] >     },
	I1217 00:49:26.679643 1170766 command_runner.go:130] >     {
	I1217 00:49:26.679649 1170766 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1217 00:49:26.679655 1170766 command_runner.go:130] >       "repoTags":  [
	I1217 00:49:26.679660 1170766 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.679672 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679676 1170766 command_runner.go:130] >       "repoDigests":  [
	I1217 00:49:26.679683 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1217 00:49:26.679691 1170766 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1217 00:49:26.679698 1170766 command_runner.go:130] >       ],
	I1217 00:49:26.679703 1170766 command_runner.go:130] >       "size":  "519884",
	I1217 00:49:26.679706 1170766 command_runner.go:130] >       "uid":  {
	I1217 00:49:26.679710 1170766 command_runner.go:130] >         "value":  "65535"
	I1217 00:49:26.679713 1170766 command_runner.go:130] >       },
	I1217 00:49:26.679717 1170766 command_runner.go:130] >       "username":  "",
	I1217 00:49:26.679721 1170766 command_runner.go:130] >       "pinned":  true
	I1217 00:49:26.679727 1170766 command_runner.go:130] >     }
	I1217 00:49:26.679730 1170766 command_runner.go:130] >   ]
	I1217 00:49:26.679735 1170766 command_runner.go:130] > }
	I1217 00:49:26.682128 1170766 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:49:26.682152 1170766 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:49:26.682160 1170766 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:49:26.682270 1170766 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:49:26.682351 1170766 ssh_runner.go:195] Run: crio config
	I1217 00:49:26.731730 1170766 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1217 00:49:26.731754 1170766 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1217 00:49:26.731761 1170766 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1217 00:49:26.731764 1170766 command_runner.go:130] > #
	I1217 00:49:26.731771 1170766 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1217 00:49:26.731778 1170766 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1217 00:49:26.731784 1170766 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1217 00:49:26.731801 1170766 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1217 00:49:26.731808 1170766 command_runner.go:130] > # reload'.
	I1217 00:49:26.731815 1170766 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1217 00:49:26.731836 1170766 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1217 00:49:26.731843 1170766 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1217 00:49:26.731849 1170766 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1217 00:49:26.731853 1170766 command_runner.go:130] > [crio]
	I1217 00:49:26.731859 1170766 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1217 00:49:26.731866 1170766 command_runner.go:130] > # containers images, in this directory.
	I1217 00:49:26.732568 1170766 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1217 00:49:26.732592 1170766 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1217 00:49:26.733157 1170766 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1217 00:49:26.733176 1170766 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1217 00:49:26.733597 1170766 command_runner.go:130] > # imagestore = ""
	I1217 00:49:26.733614 1170766 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1217 00:49:26.733623 1170766 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1217 00:49:26.734179 1170766 command_runner.go:130] > # storage_driver = "overlay"
	I1217 00:49:26.734196 1170766 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1217 00:49:26.734204 1170766 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1217 00:49:26.734478 1170766 command_runner.go:130] > # storage_option = [
	I1217 00:49:26.734782 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.734798 1170766 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1217 00:49:26.734807 1170766 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1217 00:49:26.735378 1170766 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1217 00:49:26.735394 1170766 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1217 00:49:26.735411 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1217 00:49:26.735422 1170766 command_runner.go:130] > # always happen on a node reboot
	I1217 00:49:26.735984 1170766 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1217 00:49:26.736023 1170766 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1217 00:49:26.736036 1170766 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1217 00:49:26.736041 1170766 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1217 00:49:26.736536 1170766 command_runner.go:130] > # version_file_persist = ""
	I1217 00:49:26.736561 1170766 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1217 00:49:26.736570 1170766 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1217 00:49:26.737150 1170766 command_runner.go:130] > # internal_wipe = true
	I1217 00:49:26.737173 1170766 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1217 00:49:26.737180 1170766 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1217 00:49:26.737739 1170766 command_runner.go:130] > # internal_repair = true
	I1217 00:49:26.737758 1170766 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1217 00:49:26.737766 1170766 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1217 00:49:26.737772 1170766 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1217 00:49:26.738332 1170766 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1217 00:49:26.738352 1170766 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1217 00:49:26.738356 1170766 command_runner.go:130] > [crio.api]
	I1217 00:49:26.738361 1170766 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1217 00:49:26.738921 1170766 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1217 00:49:26.738940 1170766 command_runner.go:130] > # IP address on which the stream server will listen.
	I1217 00:49:26.739496 1170766 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1217 00:49:26.739517 1170766 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1217 00:49:26.739523 1170766 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1217 00:49:26.740074 1170766 command_runner.go:130] > # stream_port = "0"
	I1217 00:49:26.740093 1170766 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1217 00:49:26.740679 1170766 command_runner.go:130] > # stream_enable_tls = false
	I1217 00:49:26.740700 1170766 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1217 00:49:26.741116 1170766 command_runner.go:130] > # stream_idle_timeout = ""
	I1217 00:49:26.741133 1170766 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1217 00:49:26.741147 1170766 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1217 00:49:26.741613 1170766 command_runner.go:130] > # stream_tls_cert = ""
	I1217 00:49:26.741629 1170766 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1217 00:49:26.741636 1170766 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1217 00:49:26.742076 1170766 command_runner.go:130] > # stream_tls_key = ""
	I1217 00:49:26.742092 1170766 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1217 00:49:26.742107 1170766 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1217 00:49:26.742117 1170766 command_runner.go:130] > # automatically pick up the changes.
	I1217 00:49:26.742632 1170766 command_runner.go:130] > # stream_tls_ca = ""
	I1217 00:49:26.742675 1170766 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743308 1170766 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1217 00:49:26.743331 1170766 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1217 00:49:26.743950 1170766 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1217 00:49:26.743971 1170766 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1217 00:49:26.743978 1170766 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1217 00:49:26.743981 1170766 command_runner.go:130] > [crio.runtime]
	I1217 00:49:26.743988 1170766 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1217 00:49:26.743996 1170766 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1217 00:49:26.744007 1170766 command_runner.go:130] > # "nofile=1024:2048"
	I1217 00:49:26.744021 1170766 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1217 00:49:26.744329 1170766 command_runner.go:130] > # default_ulimits = [
	I1217 00:49:26.744680 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.744702 1170766 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1217 00:49:26.745338 1170766 command_runner.go:130] > # no_pivot = false
	I1217 00:49:26.745359 1170766 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1217 00:49:26.745367 1170766 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1217 00:49:26.745979 1170766 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1217 00:49:26.746000 1170766 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1217 00:49:26.746006 1170766 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1217 00:49:26.746013 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.746484 1170766 command_runner.go:130] > # conmon = ""
	I1217 00:49:26.746503 1170766 command_runner.go:130] > # Cgroup setting for conmon
	I1217 00:49:26.746512 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1217 00:49:26.746837 1170766 command_runner.go:130] > conmon_cgroup = "pod"
	I1217 00:49:26.746859 1170766 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1217 00:49:26.746866 1170766 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1217 00:49:26.746875 1170766 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1217 00:49:26.747181 1170766 command_runner.go:130] > # conmon_env = [
	I1217 00:49:26.747508 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.747529 1170766 command_runner.go:130] > # Additional environment variables to set for all the
	I1217 00:49:26.747536 1170766 command_runner.go:130] > # containers. These are overridden if set in the
	I1217 00:49:26.747545 1170766 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1217 00:49:26.747848 1170766 command_runner.go:130] > # default_env = [
	I1217 00:49:26.748185 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.748200 1170766 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1217 00:49:26.748210 1170766 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1217 00:49:26.750925 1170766 command_runner.go:130] > # selinux = false
	I1217 00:49:26.750948 1170766 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1217 00:49:26.750958 1170766 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1217 00:49:26.750964 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.751661 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.751677 1170766 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1217 00:49:26.751683 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752150 1170766 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1217 00:49:26.752167 1170766 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1217 00:49:26.752181 1170766 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1217 00:49:26.752191 1170766 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1217 00:49:26.752216 1170766 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1217 00:49:26.752224 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.752873 1170766 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1217 00:49:26.752894 1170766 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1217 00:49:26.752932 1170766 command_runner.go:130] > # the cgroup blockio controller.
	I1217 00:49:26.753417 1170766 command_runner.go:130] > # blockio_config_file = ""
	I1217 00:49:26.753438 1170766 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1217 00:49:26.753444 1170766 command_runner.go:130] > # blockio parameters.
	I1217 00:49:26.754055 1170766 command_runner.go:130] > # blockio_reload = false
	I1217 00:49:26.754079 1170766 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1217 00:49:26.754084 1170766 command_runner.go:130] > # irqbalance daemon.
	I1217 00:49:26.754673 1170766 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1217 00:49:26.754692 1170766 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1217 00:49:26.754700 1170766 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1217 00:49:26.754708 1170766 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1217 00:49:26.755498 1170766 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1217 00:49:26.755515 1170766 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1217 00:49:26.755521 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.756018 1170766 command_runner.go:130] > # rdt_config_file = ""
	I1217 00:49:26.756034 1170766 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1217 00:49:26.756360 1170766 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1217 00:49:26.756381 1170766 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1217 00:49:26.756895 1170766 command_runner.go:130] > # separate_pull_cgroup = ""
	I1217 00:49:26.756917 1170766 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1217 00:49:26.756925 1170766 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1217 00:49:26.756935 1170766 command_runner.go:130] > # will be added.
	I1217 00:49:26.757272 1170766 command_runner.go:130] > # default_capabilities = [
	I1217 00:49:26.757675 1170766 command_runner.go:130] > # 	"CHOWN",
	I1217 00:49:26.758010 1170766 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1217 00:49:26.758348 1170766 command_runner.go:130] > # 	"FSETID",
	I1217 00:49:26.758682 1170766 command_runner.go:130] > # 	"FOWNER",
	I1217 00:49:26.759200 1170766 command_runner.go:130] > # 	"SETGID",
	I1217 00:49:26.759214 1170766 command_runner.go:130] > # 	"SETUID",
	I1217 00:49:26.759238 1170766 command_runner.go:130] > # 	"SETPCAP",
	I1217 00:49:26.759246 1170766 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1217 00:49:26.759249 1170766 command_runner.go:130] > # 	"KILL",
	I1217 00:49:26.759253 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759261 1170766 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1217 00:49:26.759273 1170766 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1217 00:49:26.759278 1170766 command_runner.go:130] > # add_inheritable_capabilities = false
	I1217 00:49:26.759290 1170766 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1217 00:49:26.759297 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759305 1170766 command_runner.go:130] > default_sysctls = [
	I1217 00:49:26.759310 1170766 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1217 00:49:26.759312 1170766 command_runner.go:130] > ]
	I1217 00:49:26.759317 1170766 command_runner.go:130] > # List of devices on the host that a
	I1217 00:49:26.759323 1170766 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1217 00:49:26.759327 1170766 command_runner.go:130] > # allowed_devices = [
	I1217 00:49:26.759331 1170766 command_runner.go:130] > # 	"/dev/fuse",
	I1217 00:49:26.759338 1170766 command_runner.go:130] > # 	"/dev/net/tun",
	I1217 00:49:26.759341 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759347 1170766 command_runner.go:130] > # List of additional devices. specified as
	I1217 00:49:26.759358 1170766 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1217 00:49:26.759363 1170766 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1217 00:49:26.759373 1170766 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1217 00:49:26.759377 1170766 command_runner.go:130] > # additional_devices = [
	I1217 00:49:26.759380 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759386 1170766 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1217 00:49:26.759396 1170766 command_runner.go:130] > # cdi_spec_dirs = [
	I1217 00:49:26.759406 1170766 command_runner.go:130] > # 	"/etc/cdi",
	I1217 00:49:26.759411 1170766 command_runner.go:130] > # 	"/var/run/cdi",
	I1217 00:49:26.759414 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759421 1170766 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1217 00:49:26.759446 1170766 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1217 00:49:26.759454 1170766 command_runner.go:130] > # Defaults to false.
	I1217 00:49:26.759459 1170766 command_runner.go:130] > # device_ownership_from_security_context = false
	I1217 00:49:26.759466 1170766 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1217 00:49:26.759476 1170766 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1217 00:49:26.759480 1170766 command_runner.go:130] > # hooks_dir = [
	I1217 00:49:26.759486 1170766 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1217 00:49:26.759490 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.759496 1170766 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1217 00:49:26.759505 1170766 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1217 00:49:26.759511 1170766 command_runner.go:130] > # its default mounts from the following two files:
	I1217 00:49:26.759515 1170766 command_runner.go:130] > #
	I1217 00:49:26.759522 1170766 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1217 00:49:26.759532 1170766 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1217 00:49:26.759537 1170766 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1217 00:49:26.759540 1170766 command_runner.go:130] > #
	I1217 00:49:26.759546 1170766 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1217 00:49:26.759556 1170766 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1217 00:49:26.759563 1170766 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1217 00:49:26.759569 1170766 command_runner.go:130] > #      only add mounts it finds in this file.
	I1217 00:49:26.759578 1170766 command_runner.go:130] > #
	I1217 00:49:26.759582 1170766 command_runner.go:130] > # default_mounts_file = ""
	I1217 00:49:26.759588 1170766 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1217 00:49:26.759595 1170766 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1217 00:49:26.759599 1170766 command_runner.go:130] > # pids_limit = -1
	I1217 00:49:26.759609 1170766 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1217 00:49:26.759619 1170766 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1217 00:49:26.759625 1170766 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1217 00:49:26.759634 1170766 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1217 00:49:26.759644 1170766 command_runner.go:130] > # log_size_max = -1
	I1217 00:49:26.759653 1170766 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1217 00:49:26.759660 1170766 command_runner.go:130] > # log_to_journald = false
	I1217 00:49:26.759666 1170766 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1217 00:49:26.759671 1170766 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1217 00:49:26.759676 1170766 command_runner.go:130] > # Path to directory for container attach sockets.
	I1217 00:49:26.759681 1170766 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1217 00:49:26.759686 1170766 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1217 00:49:26.759694 1170766 command_runner.go:130] > # bind_mount_prefix = ""
	I1217 00:49:26.759700 1170766 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1217 00:49:26.759704 1170766 command_runner.go:130] > # read_only = false
	I1217 00:49:26.759714 1170766 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1217 00:49:26.759721 1170766 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1217 00:49:26.759725 1170766 command_runner.go:130] > # live configuration reload.
	I1217 00:49:26.759734 1170766 command_runner.go:130] > # log_level = "info"
	I1217 00:49:26.759741 1170766 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1217 00:49:26.759762 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.759770 1170766 command_runner.go:130] > # log_filter = ""
	I1217 00:49:26.759776 1170766 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759782 1170766 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1217 00:49:26.759790 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759801 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.759809 1170766 command_runner.go:130] > # uid_mappings = ""
	I1217 00:49:26.759815 1170766 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1217 00:49:26.759821 1170766 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1217 00:49:26.759825 1170766 command_runner.go:130] > # separated by comma.
	I1217 00:49:26.759833 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761229 1170766 command_runner.go:130] > # gid_mappings = ""
	I1217 00:49:26.761253 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1217 00:49:26.761260 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761266 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761274 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.761925 1170766 command_runner.go:130] > # minimum_mappable_uid = -1
	I1217 00:49:26.761952 1170766 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1217 00:49:26.761960 1170766 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1217 00:49:26.761966 1170766 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1217 00:49:26.761974 1170766 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1217 00:49:26.762609 1170766 command_runner.go:130] > # minimum_mappable_gid = -1
	I1217 00:49:26.762630 1170766 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1217 00:49:26.762637 1170766 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1217 00:49:26.762643 1170766 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1217 00:49:26.763842 1170766 command_runner.go:130] > # ctr_stop_timeout = 30
	I1217 00:49:26.763856 1170766 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1217 00:49:26.763864 1170766 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1217 00:49:26.763869 1170766 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1217 00:49:26.763873 1170766 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1217 00:49:26.763878 1170766 command_runner.go:130] > # drop_infra_ctr = true
	I1217 00:49:26.763885 1170766 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1217 00:49:26.763900 1170766 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1217 00:49:26.763909 1170766 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1217 00:49:26.763919 1170766 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1217 00:49:26.763926 1170766 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1217 00:49:26.763932 1170766 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1217 00:49:26.763938 1170766 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1217 00:49:26.763943 1170766 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1217 00:49:26.763947 1170766 command_runner.go:130] > # shared_cpuset = ""
	I1217 00:49:26.763953 1170766 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1217 00:49:26.763958 1170766 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1217 00:49:26.763963 1170766 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1217 00:49:26.763976 1170766 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1217 00:49:26.763980 1170766 command_runner.go:130] > # pinns_path = ""
	I1217 00:49:26.763986 1170766 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1217 00:49:26.764001 1170766 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1217 00:49:26.764011 1170766 command_runner.go:130] > # enable_criu_support = true
	I1217 00:49:26.764017 1170766 command_runner.go:130] > # Enable/disable the generation of the container,
	I1217 00:49:26.764022 1170766 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1217 00:49:26.764027 1170766 command_runner.go:130] > # enable_pod_events = false
	I1217 00:49:26.764033 1170766 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1217 00:49:26.764043 1170766 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1217 00:49:26.764047 1170766 command_runner.go:130] > # default_runtime = "crun"
	I1217 00:49:26.764053 1170766 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1217 00:49:26.764064 1170766 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1217 00:49:26.764077 1170766 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1217 00:49:26.764086 1170766 command_runner.go:130] > # creation as a file is not desired either.
	I1217 00:49:26.764094 1170766 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1217 00:49:26.764101 1170766 command_runner.go:130] > # the hostname is being managed dynamically.
	I1217 00:49:26.764105 1170766 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1217 00:49:26.764108 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.764115 1170766 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1217 00:49:26.764124 1170766 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1217 00:49:26.764131 1170766 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1217 00:49:26.764141 1170766 command_runner.go:130] > # Each entry in the table should follow the format:
	I1217 00:49:26.764144 1170766 command_runner.go:130] > #
	I1217 00:49:26.764149 1170766 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1217 00:49:26.764154 1170766 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1217 00:49:26.764162 1170766 command_runner.go:130] > # runtime_type = "oci"
	I1217 00:49:26.764167 1170766 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1217 00:49:26.764172 1170766 command_runner.go:130] > # inherit_default_runtime = false
	I1217 00:49:26.764194 1170766 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1217 00:49:26.764203 1170766 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1217 00:49:26.764208 1170766 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1217 00:49:26.764212 1170766 command_runner.go:130] > # monitor_env = []
	I1217 00:49:26.764217 1170766 command_runner.go:130] > # privileged_without_host_devices = false
	I1217 00:49:26.764225 1170766 command_runner.go:130] > # allowed_annotations = []
	I1217 00:49:26.764231 1170766 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1217 00:49:26.764239 1170766 command_runner.go:130] > # no_sync_log = false
	I1217 00:49:26.764246 1170766 command_runner.go:130] > # default_annotations = {}
	I1217 00:49:26.764250 1170766 command_runner.go:130] > # stream_websockets = false
	I1217 00:49:26.764254 1170766 command_runner.go:130] > # seccomp_profile = ""
	I1217 00:49:26.764304 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.764313 1170766 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1217 00:49:26.764320 1170766 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1217 00:49:26.764331 1170766 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1217 00:49:26.764338 1170766 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1217 00:49:26.764341 1170766 command_runner.go:130] > #   in $PATH.
	I1217 00:49:26.764347 1170766 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1217 00:49:26.764352 1170766 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1217 00:49:26.764359 1170766 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1217 00:49:26.764366 1170766 command_runner.go:130] > #   state.
	I1217 00:49:26.764376 1170766 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1217 00:49:26.764387 1170766 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1217 00:49:26.764393 1170766 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1217 00:49:26.764400 1170766 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1217 00:49:26.764409 1170766 command_runner.go:130] > #   the values from the default runtime on load time.
	I1217 00:49:26.764454 1170766 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1217 00:49:26.764462 1170766 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1217 00:49:26.764468 1170766 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1217 00:49:26.764475 1170766 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1217 00:49:26.764480 1170766 command_runner.go:130] > #   The currently recognized values are:
	I1217 00:49:26.764486 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1217 00:49:26.764494 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1217 00:49:26.764504 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1217 00:49:26.764515 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1217 00:49:26.764524 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1217 00:49:26.764532 1170766 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1217 00:49:26.764539 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1217 00:49:26.764554 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1217 00:49:26.764565 1170766 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1217 00:49:26.764575 1170766 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1217 00:49:26.764586 1170766 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1217 00:49:26.764592 1170766 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1217 00:49:26.764599 1170766 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1217 00:49:26.764605 1170766 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1217 00:49:26.764611 1170766 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1217 00:49:26.764620 1170766 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1217 00:49:26.764629 1170766 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1217 00:49:26.764634 1170766 command_runner.go:130] > #   deprecated option "conmon".
	I1217 00:49:26.764642 1170766 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1217 00:49:26.764650 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1217 00:49:26.764658 1170766 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1217 00:49:26.764668 1170766 command_runner.go:130] > #   should be moved to the container's cgroup
	I1217 00:49:26.764675 1170766 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1217 00:49:26.764680 1170766 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1217 00:49:26.764688 1170766 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1217 00:49:26.764692 1170766 command_runner.go:130] > #   conmon-rs by using:
	I1217 00:49:26.764705 1170766 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1217 00:49:26.764713 1170766 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1217 00:49:26.764724 1170766 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1217 00:49:26.764731 1170766 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1217 00:49:26.764740 1170766 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1217 00:49:26.764747 1170766 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1217 00:49:26.764755 1170766 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1217 00:49:26.764760 1170766 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1217 00:49:26.764769 1170766 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1217 00:49:26.764778 1170766 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1217 00:49:26.764783 1170766 command_runner.go:130] > #   when a machine crash happens.
	I1217 00:49:26.764794 1170766 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1217 00:49:26.764803 1170766 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1217 00:49:26.764814 1170766 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1217 00:49:26.764819 1170766 command_runner.go:130] > #   seccomp profile for the runtime.
	I1217 00:49:26.764831 1170766 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1217 00:49:26.764843 1170766 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1217 00:49:26.764845 1170766 command_runner.go:130] > #
	I1217 00:49:26.764850 1170766 command_runner.go:130] > # Using the seccomp notifier feature:
	I1217 00:49:26.764853 1170766 command_runner.go:130] > #
	I1217 00:49:26.764859 1170766 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1217 00:49:26.764870 1170766 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1217 00:49:26.764873 1170766 command_runner.go:130] > #
	I1217 00:49:26.764881 1170766 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1217 00:49:26.764890 1170766 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1217 00:49:26.764894 1170766 command_runner.go:130] > #
	I1217 00:49:26.764900 1170766 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1217 00:49:26.764907 1170766 command_runner.go:130] > # feature.
	I1217 00:49:26.764910 1170766 command_runner.go:130] > #
	I1217 00:49:26.764916 1170766 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1217 00:49:26.764922 1170766 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1217 00:49:26.764928 1170766 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1217 00:49:26.764934 1170766 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1217 00:49:26.764944 1170766 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1217 00:49:26.764947 1170766 command_runner.go:130] > #
	I1217 00:49:26.764953 1170766 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1217 00:49:26.764963 1170766 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1217 00:49:26.764966 1170766 command_runner.go:130] > #
	I1217 00:49:26.764972 1170766 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1217 00:49:26.764981 1170766 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1217 00:49:26.764984 1170766 command_runner.go:130] > #
	I1217 00:49:26.764991 1170766 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1217 00:49:26.764997 1170766 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1217 00:49:26.765000 1170766 command_runner.go:130] > # limitation.
	I1217 00:49:26.765005 1170766 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1217 00:49:26.765010 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1217 00:49:26.765015 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765019 1170766 command_runner.go:130] > runtime_root = "/run/crun"
	I1217 00:49:26.765028 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765047 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765056 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765061 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765065 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765069 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765073 1170766 command_runner.go:130] > allowed_annotations = [
	I1217 00:49:26.765077 1170766 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1217 00:49:26.765080 1170766 command_runner.go:130] > ]
	I1217 00:49:26.765084 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765089 1170766 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1217 00:49:26.765093 1170766 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1217 00:49:26.765096 1170766 command_runner.go:130] > runtime_type = ""
	I1217 00:49:26.765101 1170766 command_runner.go:130] > runtime_root = "/run/runc"
	I1217 00:49:26.765110 1170766 command_runner.go:130] > inherit_default_runtime = false
	I1217 00:49:26.765114 1170766 command_runner.go:130] > runtime_config_path = ""
	I1217 00:49:26.765119 1170766 command_runner.go:130] > container_min_memory = ""
	I1217 00:49:26.765124 1170766 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1217 00:49:26.765132 1170766 command_runner.go:130] > monitor_cgroup = "pod"
	I1217 00:49:26.765136 1170766 command_runner.go:130] > monitor_exec_cgroup = ""
	I1217 00:49:26.765141 1170766 command_runner.go:130] > privileged_without_host_devices = false
	I1217 00:49:26.765148 1170766 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1217 00:49:26.765158 1170766 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1217 00:49:26.765165 1170766 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1217 00:49:26.765173 1170766 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1217 00:49:26.765184 1170766 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1217 00:49:26.765195 1170766 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1217 00:49:26.765205 1170766 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1217 00:49:26.765212 1170766 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1217 00:49:26.765226 1170766 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1217 00:49:26.765235 1170766 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1217 00:49:26.765244 1170766 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1217 00:49:26.765251 1170766 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1217 00:49:26.765254 1170766 command_runner.go:130] > # Example:
	I1217 00:49:26.765266 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1217 00:49:26.765271 1170766 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1217 00:49:26.765283 1170766 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1217 00:49:26.765288 1170766 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1217 00:49:26.765297 1170766 command_runner.go:130] > # cpuset = "0-1"
	I1217 00:49:26.765301 1170766 command_runner.go:130] > # cpushares = "5"
	I1217 00:49:26.765305 1170766 command_runner.go:130] > # cpuquota = "1000"
	I1217 00:49:26.765309 1170766 command_runner.go:130] > # cpuperiod = "100000"
	I1217 00:49:26.765312 1170766 command_runner.go:130] > # cpulimit = "35"
	I1217 00:49:26.765317 1170766 command_runner.go:130] > # Where:
	I1217 00:49:26.765321 1170766 command_runner.go:130] > # The workload name is workload-type.
	I1217 00:49:26.765337 1170766 command_runner.go:130] > # To specify, the pod must have the "io.crio/workload" annotation (this is a precise string match).
	I1217 00:49:26.765342 1170766 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1217 00:49:26.765348 1170766 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1217 00:49:26.765357 1170766 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1217 00:49:26.765362 1170766 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1217 00:49:26.765372 1170766 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1217 00:49:26.765378 1170766 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1217 00:49:26.765388 1170766 command_runner.go:130] > # Default value is set to true
	I1217 00:49:26.765392 1170766 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1217 00:49:26.765399 1170766 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1217 00:49:26.765404 1170766 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1217 00:49:26.765413 1170766 command_runner.go:130] > # Default value is set to 'false'
	I1217 00:49:26.765417 1170766 command_runner.go:130] > # disable_hostport_mapping = false
	I1217 00:49:26.765422 1170766 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1217 00:49:26.765431 1170766 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1217 00:49:26.765434 1170766 command_runner.go:130] > # timezone = ""
	I1217 00:49:26.765440 1170766 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1217 00:49:26.765444 1170766 command_runner.go:130] > #
	I1217 00:49:26.765450 1170766 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1217 00:49:26.765460 1170766 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1217 00:49:26.765464 1170766 command_runner.go:130] > [crio.image]
	I1217 00:49:26.765470 1170766 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1217 00:49:26.765481 1170766 command_runner.go:130] > # default_transport = "docker://"
	I1217 00:49:26.765487 1170766 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1217 00:49:26.765498 1170766 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765502 1170766 command_runner.go:130] > # global_auth_file = ""
	I1217 00:49:26.765506 1170766 command_runner.go:130] > # The image used to instantiate infra containers.
	I1217 00:49:26.765512 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765517 1170766 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1217 00:49:26.765523 1170766 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1217 00:49:26.765536 1170766 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1217 00:49:26.765541 1170766 command_runner.go:130] > # This option supports live configuration reload.
	I1217 00:49:26.765550 1170766 command_runner.go:130] > # pause_image_auth_file = ""
	I1217 00:49:26.765556 1170766 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1217 00:49:26.765562 1170766 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1217 00:49:26.765574 1170766 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1217 00:49:26.765580 1170766 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1217 00:49:26.765583 1170766 command_runner.go:130] > # pause_command = "/pause"
	I1217 00:49:26.765589 1170766 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1217 00:49:26.765595 1170766 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1217 00:49:26.765606 1170766 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1217 00:49:26.765612 1170766 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1217 00:49:26.765624 1170766 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1217 00:49:26.765630 1170766 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1217 00:49:26.765638 1170766 command_runner.go:130] > # pinned_images = [
	I1217 00:49:26.765641 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765647 1170766 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1217 00:49:26.765654 1170766 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1217 00:49:26.765667 1170766 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1217 00:49:26.765673 1170766 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1217 00:49:26.765682 1170766 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1217 00:49:26.765687 1170766 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1217 00:49:26.765692 1170766 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1217 00:49:26.765703 1170766 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1217 00:49:26.765709 1170766 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1217 00:49:26.765722 1170766 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1217 00:49:26.765729 1170766 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1217 00:49:26.765738 1170766 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1217 00:49:26.765749 1170766 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1217 00:49:26.765755 1170766 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1217 00:49:26.765762 1170766 command_runner.go:130] > # changing them here.
	I1217 00:49:26.765771 1170766 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1217 00:49:26.765775 1170766 command_runner.go:130] > # insecure_registries = [
	I1217 00:49:26.765778 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765785 1170766 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1217 00:49:26.765793 1170766 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1217 00:49:26.765799 1170766 command_runner.go:130] > # image_volumes = "mkdir"
	I1217 00:49:26.765805 1170766 command_runner.go:130] > # Temporary directory to use for storing big files
	I1217 00:49:26.765813 1170766 command_runner.go:130] > # big_files_temporary_dir = ""
	I1217 00:49:26.765819 1170766 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1217 00:49:26.765831 1170766 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1217 00:49:26.765835 1170766 command_runner.go:130] > # auto_reload_registries = false
	I1217 00:49:26.765842 1170766 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1217 00:49:26.765854 1170766 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1217 00:49:26.765860 1170766 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1217 00:49:26.765868 1170766 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1217 00:49:26.765872 1170766 command_runner.go:130] > # The mode of short name resolution.
	I1217 00:49:26.765879 1170766 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1217 00:49:26.765891 1170766 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1217 00:49:26.765899 1170766 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1217 00:49:26.765908 1170766 command_runner.go:130] > # short_name_mode = "enforcing"
	I1217 00:49:26.765914 1170766 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1217 00:49:26.765920 1170766 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1217 00:49:26.765924 1170766 command_runner.go:130] > # oci_artifact_mount_support = true
	I1217 00:49:26.765930 1170766 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1217 00:49:26.765933 1170766 command_runner.go:130] > # CNI plugins.
	I1217 00:49:26.765942 1170766 command_runner.go:130] > [crio.network]
	I1217 00:49:26.765948 1170766 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1217 00:49:26.765958 1170766 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1217 00:49:26.765965 1170766 command_runner.go:130] > # cni_default_network = ""
	I1217 00:49:26.765972 1170766 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1217 00:49:26.765976 1170766 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1217 00:49:26.765982 1170766 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1217 00:49:26.765989 1170766 command_runner.go:130] > # plugin_dirs = [
	I1217 00:49:26.765992 1170766 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1217 00:49:26.765995 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.765999 1170766 command_runner.go:130] > # List of included pod metrics.
	I1217 00:49:26.766003 1170766 command_runner.go:130] > # included_pod_metrics = [
	I1217 00:49:26.766006 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766012 1170766 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1217 00:49:26.766015 1170766 command_runner.go:130] > [crio.metrics]
	I1217 00:49:26.766020 1170766 command_runner.go:130] > # Globally enable or disable metrics support.
	I1217 00:49:26.766031 1170766 command_runner.go:130] > # enable_metrics = false
	I1217 00:49:26.766037 1170766 command_runner.go:130] > # Specify enabled metrics collectors.
	I1217 00:49:26.766046 1170766 command_runner.go:130] > # Per default all metrics are enabled.
	I1217 00:49:26.766053 1170766 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1217 00:49:26.766061 1170766 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1217 00:49:26.766070 1170766 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1217 00:49:26.766074 1170766 command_runner.go:130] > # metrics_collectors = [
	I1217 00:49:26.766078 1170766 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1217 00:49:26.766083 1170766 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1217 00:49:26.766087 1170766 command_runner.go:130] > # 	"containers_oom_total",
	I1217 00:49:26.766090 1170766 command_runner.go:130] > # 	"processes_defunct",
	I1217 00:49:26.766094 1170766 command_runner.go:130] > # 	"operations_total",
	I1217 00:49:26.766099 1170766 command_runner.go:130] > # 	"operations_latency_seconds",
	I1217 00:49:26.766103 1170766 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1217 00:49:26.766107 1170766 command_runner.go:130] > # 	"operations_errors_total",
	I1217 00:49:26.766111 1170766 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1217 00:49:26.766116 1170766 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1217 00:49:26.766120 1170766 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1217 00:49:26.766123 1170766 command_runner.go:130] > # 	"image_pulls_success_total",
	I1217 00:49:26.766131 1170766 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1217 00:49:26.766140 1170766 command_runner.go:130] > # 	"containers_oom_count_total",
	I1217 00:49:26.766144 1170766 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1217 00:49:26.766149 1170766 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1217 00:49:26.766160 1170766 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1217 00:49:26.766163 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766169 1170766 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1217 00:49:26.766173 1170766 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1217 00:49:26.766178 1170766 command_runner.go:130] > # The port on which the metrics server will listen.
	I1217 00:49:26.766182 1170766 command_runner.go:130] > # metrics_port = 9090
	I1217 00:49:26.766187 1170766 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1217 00:49:26.766195 1170766 command_runner.go:130] > # metrics_socket = ""
	I1217 00:49:26.766200 1170766 command_runner.go:130] > # The certificate for the secure metrics server.
	I1217 00:49:26.766206 1170766 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1217 00:49:26.766216 1170766 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1217 00:49:26.766221 1170766 command_runner.go:130] > # certificate on any modification event.
	I1217 00:49:26.766224 1170766 command_runner.go:130] > # metrics_cert = ""
	I1217 00:49:26.766230 1170766 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1217 00:49:26.766239 1170766 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1217 00:49:26.766243 1170766 command_runner.go:130] > # metrics_key = ""
	I1217 00:49:26.766249 1170766 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1217 00:49:26.766252 1170766 command_runner.go:130] > [crio.tracing]
	I1217 00:49:26.766257 1170766 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1217 00:49:26.766261 1170766 command_runner.go:130] > # enable_tracing = false
	I1217 00:49:26.766266 1170766 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1217 00:49:26.766270 1170766 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1217 00:49:26.766277 1170766 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1217 00:49:26.766287 1170766 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1217 00:49:26.766292 1170766 command_runner.go:130] > # CRI-O NRI configuration.
	I1217 00:49:26.766295 1170766 command_runner.go:130] > [crio.nri]
	I1217 00:49:26.766300 1170766 command_runner.go:130] > # Globally enable or disable NRI.
	I1217 00:49:26.766308 1170766 command_runner.go:130] > # enable_nri = true
	I1217 00:49:26.766312 1170766 command_runner.go:130] > # NRI socket to listen on.
	I1217 00:49:26.766320 1170766 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1217 00:49:26.766324 1170766 command_runner.go:130] > # NRI plugin directory to use.
	I1217 00:49:26.766328 1170766 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1217 00:49:26.766333 1170766 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1217 00:49:26.766338 1170766 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1217 00:49:26.766343 1170766 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1217 00:49:26.766396 1170766 command_runner.go:130] > # nri_disable_connections = false
	I1217 00:49:26.766406 1170766 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1217 00:49:26.766411 1170766 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1217 00:49:26.766416 1170766 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1217 00:49:26.766420 1170766 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1217 00:49:26.766425 1170766 command_runner.go:130] > # NRI default validator configuration.
	I1217 00:49:26.766431 1170766 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1217 00:49:26.766438 1170766 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1217 00:49:26.766447 1170766 command_runner.go:130] > # can be restricted/rejected:
	I1217 00:49:26.766451 1170766 command_runner.go:130] > # - OCI hook injection
	I1217 00:49:26.766456 1170766 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1217 00:49:26.766466 1170766 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1217 00:49:26.766471 1170766 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1217 00:49:26.766475 1170766 command_runner.go:130] > # - adjustment of linux namespaces
	I1217 00:49:26.766486 1170766 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1217 00:49:26.766493 1170766 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1217 00:49:26.766498 1170766 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1217 00:49:26.766501 1170766 command_runner.go:130] > #
	I1217 00:49:26.766505 1170766 command_runner.go:130] > # [crio.nri.default_validator]
	I1217 00:49:26.766509 1170766 command_runner.go:130] > # nri_enable_default_validator = false
	I1217 00:49:26.766519 1170766 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1217 00:49:26.766525 1170766 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1217 00:49:26.766531 1170766 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1217 00:49:26.766540 1170766 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1217 00:49:26.766545 1170766 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1217 00:49:26.766550 1170766 command_runner.go:130] > # nri_validator_required_plugins = [
	I1217 00:49:26.766558 1170766 command_runner.go:130] > # ]
	I1217 00:49:26.766567 1170766 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1217 00:49:26.766574 1170766 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1217 00:49:26.766579 1170766 command_runner.go:130] > [crio.stats]
	I1217 00:49:26.766584 1170766 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1217 00:49:26.766590 1170766 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1217 00:49:26.766597 1170766 command_runner.go:130] > # stats_collection_period = 0
	I1217 00:49:26.766603 1170766 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1217 00:49:26.766610 1170766 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1217 00:49:26.766618 1170766 command_runner.go:130] > # collection_period = 0
	I1217 00:49:26.769313 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.709999291Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1217 00:49:26.769335 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710041801Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1217 00:49:26.769350 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.7100717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1217 00:49:26.769358 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710096963Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1217 00:49:26.769367 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710182557Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:49:26.769376 1170766 command_runner.go:130] ! time="2025-12-17T00:49:26.710452795Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1217 00:49:26.769388 1170766 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
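	For reference, the workloads table and the seccomp notifier described in the CRI-O configuration above are both opted into from the pod side via annotations; nothing in the config file activates them on its own. A minimal sketch, assuming the example [crio.runtime.workloads.workload-type] table above were actually defined and a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (the pod and container names below are hypothetical, not taken from this test run):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    # Opt the pod into the example workload; the key must match activation_annotation exactly.
	    io.crio/workload: ""
	    # Per-container override, following the form shown in the workloads example above.
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	    # Ask CRI-O to terminate the container when a blocked syscall is detected (seccomp notifier).
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  # Required for the notifier action, as noted above; otherwise the kubelet restarts the container.
	  restartPolicy: Never
	  containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1
	
	Neither annotation is used anywhere in this test run; the sketch only illustrates how the knobs documented above are consumed.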
	I1217 00:49:26.769780 1170766 cni.go:84] Creating CNI manager for ""
	I1217 00:49:26.769799 1170766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:49:26.769817 1170766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:49:26.769847 1170766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:49:26.769980 1170766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:49:26.770057 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:49:26.777246 1170766 command_runner.go:130] > kubeadm
	I1217 00:49:26.777268 1170766 command_runner.go:130] > kubectl
	I1217 00:49:26.777274 1170766 command_runner.go:130] > kubelet
	I1217 00:49:26.778436 1170766 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:49:26.778500 1170766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:49:26.786236 1170766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:49:26.799825 1170766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:49:26.813059 1170766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 00:49:26.828019 1170766 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:49:26.831670 1170766 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:49:26.831993 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:26.960014 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:27.502236 1170766 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:49:27.502256 1170766 certs.go:195] generating shared ca certs ...
	I1217 00:49:27.502272 1170766 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:27.502407 1170766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:49:27.502457 1170766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:49:27.502465 1170766 certs.go:257] generating profile certs ...
	I1217 00:49:27.502566 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:49:27.502627 1170766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:49:27.502667 1170766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:49:27.502675 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:49:27.502694 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:49:27.502705 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:49:27.502716 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:49:27.502725 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:49:27.502736 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:49:27.502746 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:49:27.502759 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:49:27.502805 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:49:27.502840 1170766 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:49:27.502848 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:49:27.502873 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:49:27.502896 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:49:27.502918 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:49:27.502963 1170766 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:49:27.502994 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.503007 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.503017 1170766 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.503565 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:49:27.523390 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:49:27.542159 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:49:27.560122 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:49:27.578247 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:49:27.596258 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:49:27.613943 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:49:27.632292 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:49:27.650819 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:49:27.669066 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:49:27.687617 1170766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:49:27.705744 1170766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:49:27.719458 1170766 ssh_runner.go:195] Run: openssl version
	I1217 00:49:27.725722 1170766 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:49:27.726120 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.733628 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:49:27.741335 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745236 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745284 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.745341 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:49:27.786230 1170766 command_runner.go:130] > 51391683
	I1217 00:49:27.786728 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:49:27.794669 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.802040 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:49:27.809799 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813741 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813839 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.813906 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:49:27.854690 1170766 command_runner.go:130] > 3ec20f2e
	I1217 00:49:27.854778 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:49:27.862235 1170766 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.869424 1170766 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:49:27.877608 1170766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881295 1170766 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881338 1170766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.881389 1170766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:49:27.921808 1170766 command_runner.go:130] > b5213941
	I1217 00:49:27.922298 1170766 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
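In the sequence above, each CA is published under /usr/share/ca-certificates, linked into /etc/ssl/certs under its file name, and then the OpenSSL subject-hash symlink (<hash>.0) is verified; the hash values 51391683, 3ec20f2e and b5213941 come from openssl x509 -hash -noout. A minimal Go sketch of that hash-and-symlink check, shelling out to openssl the same way (the paths are the illustrative ones shown above, and this is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	// compute the OpenSSL subject hash, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// verify that /etc/ssl/certs/<hash>.0 exists as a symlink
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		fmt.Printf("%s trusted via %s\n", pem, link)
	} else {
		fmt.Printf("missing symlink %s (err=%v)\n", link, err)
	}
}
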
	I1217 00:49:27.929684 1170766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933543 1170766 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:49:27.933568 1170766 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:49:27.933576 1170766 command_runner.go:130] > Device: 259,1	Inode: 3648879     Links: 1
	I1217 00:49:27.933583 1170766 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:49:27.933589 1170766 command_runner.go:130] > Access: 2025-12-17 00:45:19.435586201 +0000
	I1217 00:49:27.933595 1170766 command_runner.go:130] > Modify: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933600 1170766 command_runner.go:130] > Change: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933605 1170766 command_runner.go:130] >  Birth: 2025-12-17 00:41:14.780595577 +0000
	I1217 00:49:27.933682 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:49:27.974244 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:27.974730 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:49:28.015269 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.015758 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:49:28.065826 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.066538 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:49:28.108358 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.108531 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:49:28.149181 1170766 command_runner.go:130] > Certificate will not expire
	I1217 00:49:28.149647 1170766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:49:28.190353 1170766 command_runner.go:130] > Certificate will not expire
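The -checkend 86400 invocations above make openssl print "Certificate will expire" / "Certificate will not expire" and exit non-zero if the certificate expires within the next 24 hours. The same check can be done without shelling out; a minimal Go sketch using crypto/x509 (the path is one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}

	// parse the PEM-encoded certificate
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// equivalent of "openssl x509 -checkend 86400"
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
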
	I1217 00:49:28.190474 1170766 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:49:28.190584 1170766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:49:28.190665 1170766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:49:28.221145 1170766 cri.go:89] found id: ""
	I1217 00:49:28.221267 1170766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:49:28.228507 1170766 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:49:28.228597 1170766 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:49:28.228619 1170766 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:49:28.229395 1170766 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:49:28.229438 1170766 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:49:28.229512 1170766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:49:28.236906 1170766 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:49:28.237356 1170766 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-389537" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.237502 1170766 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-389537" cluster setting kubeconfig missing "functional-389537" context setting]
	I1217 00:49:28.237796 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.238221 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.238396 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.238920 1170766 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:49:28.238939 1170766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:49:28.238945 1170766 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:49:28.238950 1170766 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:49:28.238954 1170766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:49:28.238995 1170766 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:49:28.239224 1170766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:49:28.246965 1170766 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 00:49:28.247039 1170766 kubeadm.go:602] duration metric: took 17.573937ms to restartPrimaryControlPlane
	I1217 00:49:28.247066 1170766 kubeadm.go:403] duration metric: took 56.597633ms to StartCluster
	I1217 00:49:28.247104 1170766 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.247179 1170766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.247837 1170766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:49:28.248043 1170766 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:49:28.248489 1170766 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:49:28.248569 1170766 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:49:28.248676 1170766 addons.go:70] Setting storage-provisioner=true in profile "functional-389537"
	I1217 00:49:28.248696 1170766 addons.go:239] Setting addon storage-provisioner=true in "functional-389537"
	I1217 00:49:28.248719 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.249218 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.251024 1170766 addons.go:70] Setting default-storageclass=true in profile "functional-389537"
	I1217 00:49:28.251049 1170766 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-389537"
	I1217 00:49:28.251367 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.254651 1170766 out.go:179] * Verifying Kubernetes components...
	I1217 00:49:28.257533 1170766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:49:28.287633 1170766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:49:28.290502 1170766 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.290526 1170766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:49:28.290609 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.312501 1170766 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:49:28.312677 1170766 kapi.go:59] client config for functional-389537: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:49:28.312998 1170766 addons.go:239] Setting addon default-storageclass=true in "functional-389537"
	I1217 00:49:28.313045 1170766 host.go:66] Checking if "functional-389537" exists ...
	I1217 00:49:28.313499 1170766 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:49:28.334272 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.347658 1170766 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:28.347681 1170766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:49:28.347742 1170766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:49:28.374030 1170766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:49:28.486040 1170766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:49:28.502536 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:28.510858 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.252938 1170766 node_ready.go:35] waiting up to 6m0s for node "functional-389537" to be "Ready" ...
	I1217 00:49:29.253062 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.253118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.253338 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253370 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253391 1170766 retry.go:31] will retry after 245.662002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253435 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.253452 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253459 1170766 retry.go:31] will retry after 276.192706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.253512 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.500088 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:29.530677 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.579588 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.579743 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.579792 1170766 retry.go:31] will retry after 478.611243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607395 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.607453 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.607473 1170766 retry.go:31] will retry after 213.763614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
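Both addon manifests fail to apply while the apiserver on localhost:8441 is still coming back up, so each failure is queued for another attempt after a short, jittered delay that grows over successive attempts (the "will retry after ..." lines from retry.go). A generic sketch of that retry-with-backoff pattern in Go; it is illustrative only and not minikube's actual retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, doubling the base delay each attempt
// and adding jitter so repeated callers do not retry in lockstep.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // simulate the apiserver still restarting
		}
		return nil
	})
	fmt.Println("final result:", err)
}
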
	I1217 00:49:29.753751 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:29.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:29.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:29.822424 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:29.886054 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:29.886099 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:29.886150 1170766 retry.go:31] will retry after 580.108639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.059411 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.142412 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.142520 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.142548 1170766 retry.go:31] will retry after 335.340669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.253845 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.254297 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:30.466582 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:30.478378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:30.546834 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.546919 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.546953 1170766 retry.go:31] will retry after 1.248601584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557846 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:30.557940 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.557983 1170766 retry.go:31] will retry after 1.081200972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:30.753182 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:30.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:30.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.253427 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.253542 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:31.253954 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
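In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-389537 roughly every 500ms (visible in the request timestamps) and tolerates the connection-refused errors while the apiserver restarts, up to the 6m0s budget announced at 00:49:29. A minimal client-go sketch of that wait loop; the kubeconfig path is illustrative and this is not the minikube code itself:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// overall wait budget, mirroring the 6m0s in the log
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "functional-389537", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			// e.g. "connection refused" while the apiserver is restarting
			fmt.Println("will retry:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node Ready")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}
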
	I1217 00:49:31.639465 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:31.698941 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.698993 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.699013 1170766 retry.go:31] will retry after 1.870151971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.754126 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:31.754197 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:31.754530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:31.795965 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:31.861932 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:31.861982 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:31.862003 1170766 retry.go:31] will retry after 1.008225242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.253184 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.253372 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.253717 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:32.753360 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:32.753773 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:32.871155 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:32.928211 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:32.931741 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:32.931825 1170766 retry.go:31] will retry after 1.349013392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.253256 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:33.569378 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:33.627393 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:33.631136 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.631170 1170766 retry.go:31] will retry after 1.556307432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:33.753384 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:33.753462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:33.753732 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:33.753786 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:34.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.253674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:34.281872 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:34.338860 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:34.338952 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.338994 1170766 retry.go:31] will retry after 2.730785051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:34.753261 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:34.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:34.753705 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.188371 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:35.253305 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.253379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.253659 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:35.253682 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253699 1170766 retry.go:31] will retry after 4.092845301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:35.253755 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:35.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:35.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:35.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:36.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:36.253666 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:36.753252 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:36.753327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:36.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.070065 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:37.127098 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:37.130934 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.130970 1170766 retry.go:31] will retry after 4.776908541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:37.253166 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.253608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:37.753194 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:37.753285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:37.753659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:38.253587 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.253946 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:38.254001 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:38.753912 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:38.753994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:38.754371 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.254004 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.254408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:39.346816 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:39.407133 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:39.411576 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.411608 1170766 retry.go:31] will retry after 4.420378296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:39.753168 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:39.753277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:39.753541 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.253304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:40.753271 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:40.753349 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:40.753656 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:40.753707 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:41.253157 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.253546 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:41.753310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:41.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:41.909084 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:41.968890 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:41.968925 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:41.968945 1170766 retry.go:31] will retry after 4.028082996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:42.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.253706 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:42.753164 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:42.753238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:42.753522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.253354 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.253724 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:43.253792 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:43.753558 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:43.753644 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:43.753949 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:43.832189 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:43.890902 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:43.894375 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:43.894408 1170766 retry.go:31] will retry after 8.166287631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:44.253620 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:44.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:44.753652 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:44.753996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.253708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.254080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:45.254153 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:45.753590 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:45.753659 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:45.753909 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:45.997293 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:46.061414 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:46.061451 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.061470 1170766 retry.go:31] will retry after 11.083982648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:46.253886 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.253962 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.254309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:46.754095 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:46.754205 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:46.754534 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.253185 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.253531 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:47.753195 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:47.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:47.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:47.753675 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:48.253335 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.253411 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.253779 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:48.753583 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:48.753654 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:48.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.253646 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.254063 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:49.753928 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:49.754007 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:49.754325 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:49.754377 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
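
The node_ready.go warnings above come from a poll of /api/v1/nodes/functional-389537 that checks the node's Ready condition; every attempt fails at the TCP layer because nothing is answering on 192.168.49.2:8441. A minimal client-go sketch of that kind of check (illustrative only, not minikube's actual node_ready.go; kubeconfig path and node name are taken from the log):

// ready_check.go - minimal sketch of a node Ready-condition check (illustrative only).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the kubectl invocations in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/functional-389537 - the same request the round_trippers lines show.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-389537", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this returns "connection refused", as in the warnings above.
		fmt.Println("node not reachable:", err)
		return
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", cond.Status)
		}
	}
}
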
	I1217 00:49:50.253612 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.253695 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.253960 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:50.753804 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:50.753885 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:50.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.254063 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.254137 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.254480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:51.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:51.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:51.753480 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:52.060996 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:52.120691 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:52.124209 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.124248 1170766 retry.go:31] will retry after 5.294346985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:52.253619 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.253696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:52.254054 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:52.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:52.753693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:53.753775 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:53.753855 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:53.754194 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:54.254037 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.254206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.254462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:54.254510 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:54.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:54.753239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:54.753523 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.253651 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:55.753370 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:55.753449 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:55.753783 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.253267 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.253341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.253616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:56.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:56.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:56.753617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:56.753681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:57.146315 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:49:57.205486 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.209162 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.209194 1170766 retry.go:31] will retry after 16.847278069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.253385 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.253754 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:57.419134 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:49:57.479419 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:49:57.482994 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.483029 1170766 retry.go:31] will retry after 11.356263683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:49:57.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:57.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:57.753493 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.253330 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.253407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:58.753639 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:58.753716 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:58.754093 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:49:58.754160 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:49:59.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.253688 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.254003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:49:59.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:49:59.753887 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:49:59.754215 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.253724 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.253810 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.254155 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:00.754120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:00.754206 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:00.754562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:00.754621 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:01.253240 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.253370 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.253698 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:01.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:01.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:01.753668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.253607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:02.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:02.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:02.753608 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:03.253193 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:03.253613 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:03.753572 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:03.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:03.754045 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.253947 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.254268 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:04.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:04.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:04.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:05.253850 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:05.254364 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:05.754125 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:05.754208 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:05.754551 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.253164 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.253237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:06.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:06.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.253346 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.253428 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.253751 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:07.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:07.753540 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:07.753830 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:07.753881 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:08.253347 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.253424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.253762 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.753666 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:08.753745 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:08.754125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:08.840442 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:08.894240 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:08.898223 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:08.898257 1170766 retry.go:31] will retry after 31.216976051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
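
The "will retry after ..." intervals logged by retry.go grow from a few seconds toward roughly half a minute, i.e. a jittered, roughly exponential backoff between kubectl apply attempts. A small Go sketch of that pattern (an illustration of the backoff shape only, not minikube's retry.go):

// backoff_sketch.go - jittered exponential backoff, as suggested by the retry.go intervals above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping an increasing, jittered interval between failures.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Up to 50% random jitter on top of the current delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2 // grow the delay each round, as the logged intervals roughly do
	}
	return errors.New("all attempts failed")
}

func main() {
	_ = retryWithBackoff(5, 2*time.Second, func() error {
		return errors.New("connection refused") // stand-in for the failing kubectl apply
	})
}
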
	I1217 00:50:09.253588 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.253672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:09.753741 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:09.753825 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:09.754120 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:09.754170 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:10.253935 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.254009 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:10.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:10.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:10.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.253844 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.253918 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.254271 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:11.754088 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:11.754175 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:11.754499 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:11.754558 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:12.253187 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.253522 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:12.753227 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:12.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:12.753589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:13.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:13.753701 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:13.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:14.057576 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:14.115415 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:14.119129 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.119165 1170766 retry.go:31] will retry after 28.147339136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:14.253462 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.253544 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.253877 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:14.253932 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:14.753601 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:14.753672 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:14.753968 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.253641 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.253732 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.253997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:15.753777 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:15.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:15.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:16.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.253982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.254308 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:16.254362 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:16.753626 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:16.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:16.754016 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.253840 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.253928 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.254281 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:17.754086 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:17.754162 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:17.754503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.253672 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:18.753651 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:18.753736 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:18.754062 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:18.754120 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:19.253943 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.254033 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.254372 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:19.753082 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:19.753159 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:19.753506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:20.753388 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:20.753479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:20.753884 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:21.253615 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.253955 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:21.254007 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:21.753781 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:21.753865 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:21.754189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.254001 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.254355 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:22.753077 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:22.753153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:22.753404 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.253112 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.253188 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.253528 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:23.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:23.753620 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:23.753941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:23.753996 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:24.253660 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.253733 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.254004 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:24.753783 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:24.753862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:24.754204 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.253869 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.253944 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.254293 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:25.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:25.753710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:25.753985 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:25.754034 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:26.253773 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.253845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:26.753983 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:26.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:26.754381 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.253979 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.254336 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:27.753096 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:27.753176 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:27.753474 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:28.253306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.253742 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:28.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:28.753591 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:28.753660 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:28.753916 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.253231 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:29.753237 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:29.753336 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:29.753688 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.253320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:30.753246 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:30.753320 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:30.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:30.753699 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:31.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.253635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:31.753306 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:31.753379 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:31.753638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.253669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:32.753259 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:32.753350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:32.753691 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:32.753743 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:33.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.253478 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.253794 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:33.753653 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:33.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:33.754080 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.253900 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.253975 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.254314 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:34.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:34.753727 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:34.754008 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:34.754052 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:35.253868 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.253945 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.254265 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:35.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:35.753720 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:35.754034 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.253598 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.253941 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:36.753708 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:36.753783 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:36.754104 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:36.754165 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:37.253918 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.253995 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.254311 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:37.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:37.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:37.753961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.253926 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.254006 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.254296 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:38.754122 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:38.754199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:38.754549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:38.754615 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:39.253269 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.253710 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:39.753180 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:39.753267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:39.753624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.116186 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:50:40.183350 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:40.183412 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:40.183435 1170766 retry.go:31] will retry after 25.382750455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
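The storage-provisioner manifest follows the same apply-and-retry pattern as the storageclass manifest, with backoffs in the 17-28 second range visible in the timestamps. A rough shell equivalent of what the addon manager is doing here, using the exact command from the log (the attempt limit and sleep are illustrative, not minikube's actual backoff schedule):

    # Retry the apply until the apiserver accepts it; limits are illustrative only.
    for i in $(seq 1 10); do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep 25
    done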
	I1217 00:50:40.253664 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.253739 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.254066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:40.753634 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:40.753706 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:40.753966 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:41.253718 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.253791 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.254134 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:41.254188 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:41.754033 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:41.754109 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:41.754488 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.253178 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.253257 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.253626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:42.266982 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:42.344498 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:42.344537 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.344558 1170766 retry.go:31] will retry after 17.409313592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:50:42.753120 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:42.753194 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:42.753542 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:43.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:43.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:43.753959 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:44.253776 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.253851 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.254170 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:44.753822 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:44.753901 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:44.754256 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.253756 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.253922 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.254427 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:45.753226 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:45.753326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:45.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:46.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:46.253668 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:46.753299 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:46.753383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:46.753643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.253287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:47.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:47.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:47.753623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:48.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.253662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:48.253705 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:48.753669 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:48.753752 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:48.754072 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.253894 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.253970 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.254291 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:49.753636 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:49.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:49.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:50.253843 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.253926 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.254289 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:50.254345 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:50.754111 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:50.754190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:50.754553 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.253242 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:51.753222 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:51.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:51.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:52.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:52.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:52.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:52.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:53.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.253294 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:53.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:53.754044 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:53.754422 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.253188 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.253562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:54.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:54.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:54.753632 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:54.753689 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:55.253391 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.253469 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:55.753512 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:55.753582 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:55.753864 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:56.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:56.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:56.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:57.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.253390 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.253653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:57.253693 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:57.753200 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:57.753283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:57.753631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.253436 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.253523 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.253931 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:58.753948 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:58.754017 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:58.754272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.254035 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.254118 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.254476 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:50:59.254537 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:50:59.753199 1170766 type.go:168] "Request Body" body=""
	I1217 00:50:59.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:50:59.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:50:59.754864 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:50:59.815839 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815879 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:50:59.815961 1170766 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
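At this point the default-storageclass addon is given up on. The error text suggests --validate=false, which only skips the OpenAPI download used for client-side validation; the apply would still fail here because the apiserver itself is refusing connections. For reference, the suggested form of the command from the log would be:

    # Skips client-side validation only; a reachable apiserver is still required.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml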
	I1217 00:51:00.253363 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.253878 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:00.753241 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:00.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:00.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.253215 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.253295 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.253631 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:01.753302 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:01.753369 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:01.753727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:01.753787 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:02.253228 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.253347 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.253689 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:02.753247 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:02.753324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:02.753665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.253230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.253300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.253575 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:03.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:03.753699 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:03.754077 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:03.754136 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:04.253779 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.254148 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:04.753646 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:04.753717 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:04.753978 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.253862 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.253937 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.254272 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:05.566658 1170766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:51:05.627909 1170766 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.627957 1170766 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:51:05.628043 1170766 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:51:05.631111 1170766 out.go:179] * Enabled addons: 
	I1217 00:51:05.634718 1170766 addons.go:530] duration metric: took 1m37.386158891s for enable addons: enabled=[]
	I1217 00:51:05.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:05.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:05.753674 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:06.253279 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.253356 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.253609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:06.253651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:06.753202 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:06.753286 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:06.753613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.253337 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.253416 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:07.753382 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:07.753456 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:07.753719 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:08.253314 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.253394 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.253740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:08.253801 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:08.753597 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:08.753675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:08.754006 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.253704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.253961 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:09.753759 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:09.753861 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:09.754219 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:10.254036 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.254117 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.254443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:10.254499 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:10.753146 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:10.753222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:10.753504 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.253217 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.253301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:11.753266 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:11.753353 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:11.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.253431 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.253508 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.253817 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:12.753238 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:12.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:12.753597 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:12.753661 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:13.253229 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:13.753608 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:13.753693 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:14.753242 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:14.753314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:14.753606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:15.253289 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.253371 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:15.253681 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:15.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:15.753291 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:15.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.253206 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.253285 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.253602 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:16.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:16.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:17.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.253331 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.253661 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:17.253717 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:17.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:17.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:17.753577 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.253297 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.253364 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.253668 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:18.753853 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:18.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:18.754277 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:19.254102 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.254185 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.254526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:19.254586 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:19.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:19.753311 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:19.753580 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.253722 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:20.753204 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:20.753282 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:20.753652 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.253372 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.253701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:21.753406 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:21.753495 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:21.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:21.753874 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:22.253257 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.253333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:22.753179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:22.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:22.753561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.253313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:23.753603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:23.753685 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:23.754925 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1217 00:51:23.754986 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:24.253170 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.253267 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.253617 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:24.753328 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:24.753409 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:24.753746 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.253469 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.253546 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.253880 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:25.753574 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:25.753657 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:25.753917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:26.253603 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.253711 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.254049 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:26.254102 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:26.753618 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:26.753694 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:26.754114 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.253638 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.253707 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:27.753728 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:27.753801 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:27.754135 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:28.253730 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.253819 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.254157 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:28.254213 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:28.754062 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:28.754150 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:28.754428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.253246 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.253601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:29.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:29.753316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:29.753701 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.253340 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.253612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:30.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:30.753303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:30.753680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:30.753758 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:31.253283 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.253382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.253833 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:31.753547 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:31.753617 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:31.753891 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.253582 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:32.753873 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:32.753956 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:32.754335 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:32.754410 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:33.253082 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.253153 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.253408 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:33.753211 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:33.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:33.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.253332 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.253414 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.253813 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:34.753517 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:34.753595 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:34.753879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:35.253210 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.253671 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:35.253725 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:35.753393 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:35.753476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:35.753815 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.253180 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.253262 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.253769 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:36.753214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:36.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:36.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.253171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.253245 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.253568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:37.753118 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:37.753199 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:37.753448 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:37.753489 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:38.253352 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.253435 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.253790 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:38.753633 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:38.753713 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:38.754052 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.253630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.253702 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.254026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:39.753642 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:39.753718 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:39.754056 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:39.754113 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:40.253723 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.253798 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.254106 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:40.753630 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:40.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:40.754024 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.253834 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.253927 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.254334 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:41.754152 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:41.754231 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:41.754552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:41.754611 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:42.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.253335 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:42.753201 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:42.753281 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:42.753641 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.253361 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.253440 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.253765 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:43.753589 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:43.753665 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:43.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:44.253738 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.253820 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.254118 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:44.254169 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:44.753956 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:44.754034 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:44.754376 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.253875 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.253954 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.254382 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:45.753128 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:45.753232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:45.753548 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.253245 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.253330 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.253699 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:46.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:46.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:46.753570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:46.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:47.253226 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.253657 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:47.753364 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:47.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:47.753750 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.253392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:48.753652 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:48.753737 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:48.754073 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:48.754130 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:49.253766 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.253847 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.254210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:49.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:49.753704 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:49.753971 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.253788 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.253862 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.254182 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:50.753997 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:50.754076 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:50.754412 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:50.754497 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:51.253162 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.253230 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.253486 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:51.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:51.753249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:51.753596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:52.753309 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:52.753387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:52.753660 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:53.253234 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.253323 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.253702 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:53.253761 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:53.753655 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:53.753749 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:53.754112 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.253614 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.253684 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.253936 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:54.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:54.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:54.753647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.253232 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.253310 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:55.753166 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:55.753241 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:55.753558 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:55.753612 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:51:56.253214 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.253610 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:56.753338 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:56.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.253172 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.253264 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.253533 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:51:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:51:57.753301 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:51:57.753667 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:51:57.753734 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	... (node "functional-389537" Ready poll repeated every ~500ms from 00:51:58 through 00:52:58; each GET to https://192.168.49.2:8441/api/v1/nodes/functional-389537 returned "dial tcp 192.168.49.2:8441: connect: connection refused" and logged the same node_ready.go:55 retry warning)
	W1217 00:52:58.253729 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:52:58.753657 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:58.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:58.754036 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.253988 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.254385 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:52:59.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:52:59.753169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:52:59.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:00.255875 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.256036 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.256356 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:00.256590 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:00.753314 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:00.753406 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:00.753729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.253451 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.253526 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.253836 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:01.753275 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:01.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:01.753592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.253224 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:02.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:02.753389 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:02.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:02.753739 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:03.253378 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.253463 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.253737 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:03.753753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:03.753845 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:03.754210 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.253955 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.254035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:04.753631 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:04.753709 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:04.753974 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:04.754016 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:05.253893 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.254027 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:05.753103 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:05.753190 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:05.753552 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.253106 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.253183 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.253481 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:06.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:06.753270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:06.753579 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:07.253213 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.253288 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.253606 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:07.253665 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:07.753163 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:07.753237 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:07.753615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.253519 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.253592 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.253905 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:08.753950 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:08.754029 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:08.754407 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:09.253610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.253927 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:09.253968 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:09.753621 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:09.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:09.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.253736 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.253811 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.254126 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:10.753610 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:10.753682 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:10.753989 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:11.253783 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.253860 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.254192 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:11.254252 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:11.754017 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:11.754095 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:11.754418 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.253091 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.253174 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.253431 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:12.753169 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:12.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:12.753584 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.253299 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.253383 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.253725 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:13.753614 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:13.753680 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:13.753954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:13.753997 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:14.253722 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.253802 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.254151 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:14.753815 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:14.753891 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:14.754223 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.253734 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.254029 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:15.753806 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:15.753888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:15.754227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:15.754287 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:16.254074 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.254151 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.254498 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:16.753147 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:16.753225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:16.753479 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.253173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.253581 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:17.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:17.753255 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:17.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:18.253273 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.253344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.253604 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:18.253646 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:18.753564 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:18.753634 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:18.753924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.253242 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:19.753253 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:19.753337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:19.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:20.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.253655 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:20.253718 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:20.753425 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:20.753514 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:20.753897 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.253315 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.253583 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:21.753260 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:21.753341 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:21.753692 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.253297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.253625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:22.753263 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:22.753343 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:22.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:22.753650 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:23.253236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.253309 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.253636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:23.753622 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:23.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:23.754022 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.253611 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.253690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.253964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:24.753690 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:24.753765 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:24.754071 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:24.754119 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:25.253897 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.253969 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.254295 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:25.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:25.753690 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:25.753992 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.253798 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.253879 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.254195 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:26.754019 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:26.754098 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:26.754443 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:26.754501 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:27.253155 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.253228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:27.753191 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:27.753266 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:27.753626 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.253426 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.253518 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.253857 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:28.753672 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:28.753767 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:28.754076 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:29.254090 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.254181 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.254562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:29.254618 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:29.753292 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:29.753381 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:29.753726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.253402 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.253471 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:30.753408 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:30.753487 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:30.753850 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.253221 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.253298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.253663 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:31.753232 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:31.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:31.753559 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:31.753600 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:32.253254 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.253337 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.253682 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:32.755552 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:32.755633 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:32.755956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.253673 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.253924 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:33.753903 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:33.753982 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:33.754307 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:33.754366 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:34.254124 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.254211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.254539 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:34.753159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:34.753234 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:34.753549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.253211 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.253289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.253627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:35.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:35.753398 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:35.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:36.253412 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.253489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.253839 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:36.253891 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:36.753198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:36.753274 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:36.753609 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.253321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.253397 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.253727 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:37.753428 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:37.753500 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:37.753749 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:38.253689 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.253766 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.254125 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:38.254183 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:38.753984 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:38.754059 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:38.754410 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.253122 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.253198 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.253459 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:39.753151 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:39.753259 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:39.753585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.253326 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.253413 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.253767 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:40.753462 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:40.753533 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:40.753812 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:40.753859 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:41.253209 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.253596 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:41.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:41.753268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:41.753605 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.253220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.253303 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.253613 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:42.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:42.753300 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:42.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:43.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.253419 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:43.254022 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:43.753920 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:43.754014 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:43.754333 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.253118 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.253201 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.253526 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:44.753236 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:44.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:44.753642 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:45.255002 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.255152 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.255478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:45.255533 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:45.753317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:45.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.253364 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.253445 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.253796 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:46.753160 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:46.753240 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:46.753574 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.253198 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.253614 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:47.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:47.753402 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:47.753748 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:47.753808 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:48.253313 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.253387 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.253647 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:48.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:48.753728 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:48.754069 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.253753 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.253830 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.254168 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:49.753643 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:49.753731 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:49.754066 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:49.754148 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:50.253912 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.253994 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.254341 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:50.753099 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:50.753189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:50.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.253251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.253515 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:51.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:51.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:51.753662 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:52.253411 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.253511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.253890 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:52.253964 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:52.753645 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:52.753719 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:52.753976 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.253775 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.253856 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.254202 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:53.754104 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:53.754180 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:53.754506 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.253165 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.253239 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.253494 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:54.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:54.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:54.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:54.753683 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:55.253358 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.253438 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.253774 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:55.753173 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:55.753250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:55.753511 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.253177 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.253263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.253600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:56.753321 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:56.753404 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:56.753745 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:56.753805 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:57.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.253238 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.253492 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:57.753218 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:57.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:57.753634 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.253497 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.253572 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.253908 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:58.753619 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:58.753689 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:58.753944 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:53:58.753983 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:53:59.253741 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.253823 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.254166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:53:59.753959 1170766 type.go:168] "Request Body" body=""
	I1217 00:53:59.754035 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:53:59.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.253101 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.253195 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.253561 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:00.753249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:00.753333 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:00.753690 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:01.253394 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.253476 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.253809 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:01.253884 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:01.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:01.753357 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:01.753611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.253331 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.253412 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.253739 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:02.753476 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:02.753557 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:02.753921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:03.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.253662 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:03.253961 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:03.753990 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:03.754066 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:03.754393 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.253149 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.253243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.253594 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:04.753286 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:04.753367 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:04.753648 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.253317 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.253644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:05.753380 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:05.753466 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:05.753795 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:05.753852 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:06.253248 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:06.753209 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:06.753284 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:06.753562 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.253241 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.253321 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:07.753165 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:07.753244 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:07.753502 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:08.253274 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.253352 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.253726 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:08.253781 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:08.753770 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:08.753843 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:08.754162 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.253596 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.253675 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.253945 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:09.753821 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:09.753904 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:09.754197 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:10.254043 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.254115 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.254442 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:10.254495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:10.753142 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:10.753213 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:10.753467 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.253587 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:11.753279 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:11.753382 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:11.753753 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.253159 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.253233 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:12.753230 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:12.753305 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:12.753627 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:12.753685 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:13.253376 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.253460 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.253784 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:13.753616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:13.753691 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:13.753997 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.253819 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.253898 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.254259 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:14.754072 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:14.754149 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:14.754478 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:14.754538 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:15.253169 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.253248 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.253513 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:15.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:15.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:15.753616 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.253342 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.253423 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.253764 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:16.753262 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:16.753339 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:16.753601 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:17.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.253350 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.253713 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:17.253779 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:17.753480 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:17.753569 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:17.753929 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.253593 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.253664 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.253921 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:18.753923 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:18.754002 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:18.754397 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.253225 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.253564 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:19.753254 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:19.753329 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:19.753595 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:19.753636 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:20.253334 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.253720 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:20.753212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:20.753289 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:20.753619 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.253154 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.253232 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:21.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:21.753296 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:21.753620 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:21.753680 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:22.253524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.253671 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.254279 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:22.753625 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:22.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:22.753972 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.253809 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.253888 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.254196 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:23.754021 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:23.754101 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:23.754439 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:23.754495 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:24.253174 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.253250 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.253623 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:24.753225 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:24.753302 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:24.753607 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.253319 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.253403 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.253757 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:25.753190 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:25.753263 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:25.753530 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:26.253258 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.253351 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.253693 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:26.253746 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:26.753414 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:26.753490 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:26.753826 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.253253 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.253565 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:27.753244 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:27.753318 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:27.753673 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:28.253404 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.253479 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.253776 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:28.253819 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:28.753595 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:28.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:28.753935 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.253381 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.253465 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.253954 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:29.753737 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:29.753815 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:29.754158 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:30.253604 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.253677 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.253951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:30.253995 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:30.753581 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:30.753666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:30.753956 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.253745 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.253824 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.254143 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:31.753606 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:31.753692 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:31.754026 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:32.253830 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.254262 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:32.254319 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:32.754091 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:32.754169 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:32.754555 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.253132 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.253222 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.253543 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:33.753524 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:33.753608 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:33.753895 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.253616 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.253698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.254032 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:34.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:34.753696 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:34.753951 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:34.753991 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:35.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.253882 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.254227 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:35.754036 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:35.754112 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:35.754409 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.253093 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.253164 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.253416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:36.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:36.753271 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:36.753557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:37.253294 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.253378 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.253664 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:37.253713 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:37.753375 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:37.753452 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:37.753723 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.253304 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.253376 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.253712 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:38.753592 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:38.753667 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:38.754003 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:39.253608 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.253678 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.253933 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:39.253982 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:39.753743 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:39.753818 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:39.754166 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.253997 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.254080 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.254396 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:40.753116 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:40.753229 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:40.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.253645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:41.753348 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:41.753424 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:41.753761 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:41.753817 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:42.265137 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.265218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.265549 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:42.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:42.753312 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:42.753653 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.253379 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.253462 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.253788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:43.753627 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:43.753708 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:43.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:43.754014 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:44.253832 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.253905 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.254217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:44.754035 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:44.754111 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:44.754446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.253168 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.253260 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:45.753216 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:45.753292 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:45.753612 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:46.253316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.253442 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.253729 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:46.253773 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:46.753434 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:46.753511 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:46.753766 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.253200 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.253277 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.253570 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:47.753267 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:47.753344 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:47.753625 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:48.253552 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.253626 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.253879 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:48.253930 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:48.753836 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:48.753911 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:48.754217 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.254026 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.254106 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.254428 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:49.753598 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:49.753686 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:49.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:50.253804 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.253886 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.254209 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:50.254259 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:50.754039 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:50.754125 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:50.754461 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.253140 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.253209 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.253462 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:51.753185 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:51.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:51.753628 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.253212 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.253290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.253618 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:52.753250 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:52.753332 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:52.753598 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:52.753651 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:53.253249 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.253324 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.253615 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:53.753664 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:53.753741 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:53.754081 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.253591 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.253669 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.254015 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:54.753866 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:54.753946 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:54.754274 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:54.754329 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:55.254057 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.254131 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.254446 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:55.753121 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:55.753211 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:55.753456 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.253179 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.253253 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.253557 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:56.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:56.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:56.753600 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:57.253260 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.253327 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.253611 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:57.253672 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:57.753316 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:57.753392 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:57.753736 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.253532 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.253606 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.253910 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:58.753623 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:58.753700 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:58.754000 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:54:59.253655 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.253799 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.254130 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:54:59.254190 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:54:59.753954 1170766 type.go:168] "Request Body" body=""
	I1217 00:54:59.754031 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:54:59.754326 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.260359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.261314 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.266189 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:00.753848 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:00.753931 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:00.754264 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:01.253975 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.254047 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.254345 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:01.254396 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:01.753628 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:01.753698 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:01.753979 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.253768 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.253842 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.254146 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:02.753809 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:02.753881 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:02.754220 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.253637 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.253710 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.253982 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:03.753913 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:03.753997 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:03.754309 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:03.754367 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:04.254115 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.254189 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.254536 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:04.753092 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:04.753161 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:04.753416 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.253139 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.253218 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.253585 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:05.753162 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:05.753243 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:05.753568 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:06.253362 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.253441 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.253697 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:06.253744 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:06.753208 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:06.753290 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:06.753637 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.253244 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.253319 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.253680 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:07.753378 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:07.753454 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:07.753700 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:08.253293 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.253374 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.253718 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:08.253778 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:08.753537 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:08.753616 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:08.753964 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.253654 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.253730 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.254027 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:09.753808 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:09.753883 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:09.754221 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:10.254047 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.254124 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.254490 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:10.254545 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:10.753157 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:10.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:10.753567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.253243 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.253322 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.253638 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:11.753219 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:11.753304 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:11.753650 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.253202 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.253270 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.253527 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:12.753210 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:12.753287 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:12.753644 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:12.753698 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:13.253182 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.253256 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.253592 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:13.753156 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:13.753228 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:13.753477 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.253233 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.253316 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.253658 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:14.753394 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:14.753489 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:14.753829 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:14.753892 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:15.253167 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.253249 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.253560 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:15.753196 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:15.753275 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:15.753586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.253207 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.253283 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.253622 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:16.753176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:16.753251 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:16.753503 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:17.253218 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.253293 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.253643 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:17.253694 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:17.753223 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:17.753298 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:17.753630 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.253246 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.253326 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.253586 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:18.753702 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:18.753779 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:18.754110 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:19.253939 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.254018 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.254367 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:19.254421 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:19.754123 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:19.754196 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:19.754517 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.253190 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.253268 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.253624 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:20.753327 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:20.753407 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:20.753740 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.253432 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.253502 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.253792 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:21.753215 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:21.753297 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:21.753636 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:21.753701 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:22.253195 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.253276 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.253567 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:22.753171 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:22.753235 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:22.753489 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.253145 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.253220 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.253589 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:23.753228 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:23.753306 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:23.753645 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:24.253343 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.253408 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.253665 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:24.253703 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:24.753359 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:24.753447 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:24.753788 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.253481 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.253571 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.253917 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:25.753617 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:25.753683 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:25.753937 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.253216 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.253299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.253629 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:26.753220 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:26.753299 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:26.753635 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:26.753691 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:27.253329 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.253399 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.253659 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:27.753345 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:27.753420 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:27.753799 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.253586 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.253666 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.253996 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1217 00:55:28.753229 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:28.753313 1170766 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-389537" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1217 00:55:28.753669 1170766 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1217 00:55:28.753726 1170766 node_ready.go:55] error getting node "functional-389537" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-389537": dial tcp 192.168.49.2:8441: connect: connection refused
	I1217 00:55:29.253176 1170766 type.go:168] "Request Body" body=""
	I1217 00:55:29.253235 1170766 node_ready.go:38] duration metric: took 6m0.000252571s for node "functional-389537" to be "Ready" ...
	I1217 00:55:29.256355 1170766 out.go:203] 
	W1217 00:55:29.259198 1170766 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:55:29.259223 1170766 out.go:285] * 
	W1217 00:55:29.261375 1170766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:55:29.264098 1170766 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:55:38 functional-389537 crio[5405]: time="2025-12-17T00:55:38.048031022Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=4706890d-4ffd-4e66-b223-478f61947678 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.114719131Z" level=info msg="Checking image status: minikube-local-cache-test:functional-389537" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115307967Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115357443Z" level=info msg="Image minikube-local-cache-test:functional-389537 not found" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.115430007Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-389537 found" id=9fe80880-0b65-492c-9f4c-0a010a90fb87 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139497123Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-389537" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139635048Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-389537 not found" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.139677952Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-389537 found" id=e1e97712-7b25-4c58-8ce9-e8ccc3d68083 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166406933Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-389537" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166541256Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-389537 not found" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:39 functional-389537 crio[5405]: time="2025-12-17T00:55:39.166583856Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-389537 found" id=6f08da1a-8932-466e-80c4-fbb2a61e7b9d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.15548198Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8c29c4dc-94d8-4d41-85fa-eedb6c6e43bd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.478576539Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.478739227Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:40 functional-389537 crio[5405]: time="2025-12-17T00:55:40.47878291Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=af5f534c-bde6-4b85-8f73-bbe613827a8b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080270662Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080449283Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.080489529Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=6ecc199b-b5c8-4f3e-8700-8871f7347de0 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.105118388Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.10527324Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.105311869Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=95482d6c-ef1f-4238-b9c4-1a5ad9800e55 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.128969033Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.129133665Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.129186956Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f34309e3-3d43-4d8f-9779-28ad18957183 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:55:41 functional-389537 crio[5405]: time="2025-12-17T00:55:41.66083782Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3707f441-343b-453c-ac33-cd3630163eb1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:55:45.718156    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:45.718967    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:45.720652    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:45.721233    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:55:45.724345    9635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:55:45 up  6:38,  0 user,  load average: 0.46, 0.27, 0.71
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Dec 17 00:55:43 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:43 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:43 functional-389537 kubelet[9513]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:43 functional-389537 kubelet[9513]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:43 functional-389537 kubelet[9513]: E1217 00:55:43.835331    9513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:43 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:44 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Dec 17 00:55:44 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:44 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:44 functional-389537 kubelet[9540]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:44 functional-389537 kubelet[9540]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:44 functional-389537 kubelet[9540]: E1217 00:55:44.558386    9540 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:44 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:44 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:55:45 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1150.
	Dec 17 00:55:45 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:45 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:55:45 functional-389537 kubelet[9553]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:45 functional-389537 kubelet[9553]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 00:55:45 functional-389537 kubelet[9553]: E1217 00:55:45.324558    9553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:55:45 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:55:45 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
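Note: the kubelet journal above shows the service in a tight crash loop (restart counter 1148-1150 within about two seconds) because the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host. A minimal diagnostic sketch, assuming shell access to the node via the profile name used in this report (the cgroup probe is a generic Linux check, not a command taken from this log):

	# Print the filesystem type of the cgroup mount:
	# "cgroup2fs" means cgroup v2, "tmpfs" means the host is still on cgroup v1.
	minikube ssh -p functional-389537 -- stat -fc %T /sys/fs/cgroup/
	# Inspect the crash loop directly, as the kubeadm output later in this report also suggests.
	minikube ssh -p functional-389537 -- sudo journalctl -u kubelet --no-pager -n 50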
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (391.8095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.74s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:56:45.356637 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:07.911825 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:30.977179 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:45.356631 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:05:07.912301 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:06:45.356619 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.779577414s)

                                                
                                                
-- stdout --
	* [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240228s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
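The stderr above points at two possible workarounds for the cgroup v1 validation failure: minikube's own suggestion of forcing the systemd cgroup driver, and the FailCgroupV1 option named in the kubeadm SystemVerification warning. A sketch of both, assuming the cgroup v1 check really is what blocks the kubelet; neither command was run in this job, and wiring the KubeletConfiguration fragment into minikube's kubeadm invocation is an assumption, not something this report demonstrates:

	# 1) The suggestion printed by minikube above.
	out/minikube-linux-arm64 start -p functional-389537 \
	  --extra-config=kubelet.cgroup-driver=systemd --wait=all

	# 2) The option named in the kubeadm SystemVerification warning: keep running on
	#    cgroup v1 by turning off the FailCgroupV1 check in the kubelet configuration
	#    (field name per that warning; how to feed this file to kubeadm here is assumed).
	cat > kubelet-cgroupv1-patch.yaml <<-'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF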
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.780963939s for "functional-389537" cluster.
I1217 01:07:59.728558 1136597 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
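The inspect output above shows the kic container still running with the API server port 8441/tcp published on 127.0.0.1:33911. A small sketch for reading that mapping back from the host, using the same Go-template style minikube itself uses for port 22 later in this log (port numbers are the ones shown above and will differ per run):

	# HostPort that Docker assigned to the API server port inside the container.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389537
	# Equivalent view via `docker port`.
	docker port functional-389537 8441/tcp

Because kubeadm never brought the control plane up in this run, probing that port is still expected to fail even though the container itself reports Running below.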
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (346.412864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 logs -n 25: (1.165729407s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image   │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start   │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:latest                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add minikube-local-cache-test:functional-389537                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache delete minikube-local-cache-test:functional-389537                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl images                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cache   │ functional-389537 cache reload                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ kubectl │ functional-389537 kubectl -- --context functional-389537 get pods                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ start   │ -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:55:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:55:46.994785 1176706 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:55:46.994905 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.994909 1176706 out.go:374] Setting ErrFile to fd 2...
	I1217 00:55:46.994912 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.995145 1176706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:55:46.995485 1176706 out.go:368] Setting JSON to false
	I1217 00:55:46.996300 1176706 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23897,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:55:46.996353 1176706 start.go:143] virtualization:  
	I1217 00:55:46.999868 1176706 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:55:47.003126 1176706 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:55:47.003469 1176706 notify.go:221] Checking for updates...
	I1217 00:55:47.009985 1176706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:55:47.012797 1176706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:55:47.015597 1176706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:55:47.018366 1176706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:55:47.021294 1176706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:55:47.024608 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:47.024710 1176706 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:55:47.058976 1176706 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:55:47.059096 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.117622 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.107831529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.117708 1176706 docker.go:319] overlay module found
	I1217 00:55:47.120741 1176706 out.go:179] * Using the docker driver based on existing profile
	I1217 00:55:47.123563 1176706 start.go:309] selected driver: docker
	I1217 00:55:47.123570 1176706 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.123673 1176706 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:55:47.123773 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.174997 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.166206706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.175382 1176706 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:55:47.175411 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:47.175464 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:47.175503 1176706 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.182544 1176706 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:55:47.185443 1176706 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:55:47.188263 1176706 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:55:47.191087 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:47.191140 1176706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:55:47.191147 1176706 cache.go:65] Caching tarball of preloaded images
	I1217 00:55:47.191162 1176706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:55:47.191229 1176706 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:55:47.191238 1176706 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:55:47.191343 1176706 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:55:47.210444 1176706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:55:47.210456 1176706 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:55:47.210476 1176706 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:55:47.210509 1176706 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:55:47.210571 1176706 start.go:364] duration metric: took 45.496µs to acquireMachinesLock for "functional-389537"
	I1217 00:55:47.210589 1176706 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:55:47.210598 1176706 fix.go:54] fixHost starting: 
	I1217 00:55:47.210865 1176706 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:55:47.227344 1176706 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:55:47.227372 1176706 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:55:47.230529 1176706 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:55:47.230551 1176706 machine.go:94] provisionDockerMachine start ...
	I1217 00:55:47.230646 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.247199 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.247509 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.247515 1176706 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:55:47.376058 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.376078 1176706 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:55:47.376140 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.394017 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.394338 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.394346 1176706 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:55:47.541042 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.541113 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.567770 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.568067 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.568081 1176706 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:55:47.696783 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:55:47.696798 1176706 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:55:47.696826 1176706 ubuntu.go:190] setting up certificates
	I1217 00:55:47.696844 1176706 provision.go:84] configureAuth start
	I1217 00:55:47.696911 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:47.715433 1176706 provision.go:143] copyHostCerts
	I1217 00:55:47.715503 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:55:47.715510 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:55:47.715589 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:55:47.715698 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:55:47.715703 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:55:47.715729 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:55:47.715793 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:55:47.715796 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:55:47.715819 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:55:47.715916 1176706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:55:47.936144 1176706 provision.go:177] copyRemoteCerts
	I1217 00:55:47.936198 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:55:47.936245 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.956022 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.053167 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:55:48.072266 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:55:48.091659 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:55:48.111240 1176706 provision.go:87] duration metric: took 414.372164ms to configureAuth
	I1217 00:55:48.111259 1176706 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:55:48.111463 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:48.111573 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.130165 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:48.130471 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:48.130482 1176706 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:55:48.471522 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:55:48.471533 1176706 machine.go:97] duration metric: took 1.240975938s to provisionDockerMachine
	I1217 00:55:48.471544 1176706 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:55:48.471555 1176706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:55:48.471613 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:55:48.471661 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.490121 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.584735 1176706 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:55:48.588097 1176706 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:55:48.588115 1176706 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:55:48.588125 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:55:48.588181 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:55:48.588263 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:55:48.588334 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:55:48.588376 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:55:48.596032 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:48.613682 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:55:48.631217 1176706 start.go:296] duration metric: took 159.660022ms for postStartSetup
	I1217 00:55:48.631287 1176706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:55:48.631323 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.648559 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.741603 1176706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:55:48.746366 1176706 fix.go:56] duration metric: took 1.535755013s for fixHost
	I1217 00:55:48.746384 1176706 start.go:83] releasing machines lock for "functional-389537", held for 1.535804694s
	I1217 00:55:48.746455 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:48.763224 1176706 ssh_runner.go:195] Run: cat /version.json
	I1217 00:55:48.763430 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.763750 1176706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:55:48.763808 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.786426 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.786940 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.880624 1176706 ssh_runner.go:195] Run: systemctl --version
	I1217 00:55:48.974663 1176706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:55:49.027409 1176706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:55:49.032432 1176706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:55:49.032491 1176706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:55:49.041183 1176706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:55:49.041196 1176706 start.go:496] detecting cgroup driver to use...
	I1217 00:55:49.041228 1176706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:55:49.041278 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:55:49.058264 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:55:49.077295 1176706 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:55:49.077360 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:55:49.093971 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:55:49.107900 1176706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:55:49.227935 1176706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:55:49.348723 1176706 docker.go:234] disabling docker service ...
	I1217 00:55:49.348791 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:55:49.364370 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:55:49.377769 1176706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:55:49.508111 1176706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:55:49.633558 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:55:49.646587 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:55:49.660861 1176706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:55:49.660916 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.670006 1176706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:55:49.670064 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.678812 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.687975 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.697006 1176706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:55:49.705500 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.714719 1176706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.723320 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.732206 1176706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:55:49.740020 1176706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:55:49.747555 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:49.895105 1176706 ssh_runner.go:195] Run: sudo systemctl restart crio
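The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. As an illustration only (reconstructed from the commands logged above, not copied from the node), the resulting drop-in would contain roughly:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]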
	I1217 00:55:50.085156 1176706 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:55:50.085220 1176706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:55:50.089378 1176706 start.go:564] Will wait 60s for crictl version
	I1217 00:55:50.089440 1176706 ssh_runner.go:195] Run: which crictl
	I1217 00:55:50.093400 1176706 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:55:50.123005 1176706 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:55:50.123090 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.155928 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.190668 1176706 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:55:50.193712 1176706 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:55:50.210245 1176706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:55:50.217339 1176706 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:55:50.220306 1176706 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:55:50.220479 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:50.220549 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.261117 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.261129 1176706 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:55:50.261188 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.288200 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.288211 1176706 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:55:50.288217 1176706 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:55:50.288323 1176706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:55:50.288468 1176706 ssh_runner.go:195] Run: crio config
	I1217 00:55:50.348160 1176706 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:55:50.348190 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:50.348199 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:50.348212 1176706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:55:50.348234 1176706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:55:50.348361 1176706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:55:50.348453 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:55:50.356478 1176706 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:55:50.356555 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:55:50.364296 1176706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:55:50.378459 1176706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:55:50.391769 1176706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1217 00:55:50.404843 1176706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:55:50.408803 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:50.530281 1176706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:55:50.553453 1176706 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:55:50.553463 1176706 certs.go:195] generating shared ca certs ...
	I1217 00:55:50.553477 1176706 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:55:50.553609 1176706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:55:50.553660 1176706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:55:50.553666 1176706 certs.go:257] generating profile certs ...
	I1217 00:55:50.553779 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:55:50.553831 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:55:50.553877 1176706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:55:50.553979 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:55:50.554006 1176706 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:55:50.554013 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:55:50.554039 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:55:50.554060 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:55:50.554085 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:55:50.554129 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:50.555361 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:55:50.582492 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:55:50.603683 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:55:50.621384 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:55:50.639056 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:55:50.656396 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:55:50.673796 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:55:50.690805 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:55:50.708128 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:55:50.726044 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:55:50.743273 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:55:50.763262 1176706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:55:50.777113 1176706 ssh_runner.go:195] Run: openssl version
	I1217 00:55:50.783340 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.791319 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:55:50.799039 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802914 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802970 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.844145 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:55:50.851746 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.859382 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:55:50.866837 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870628 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870686 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.912088 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:55:50.919506 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.926804 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:55:50.934239 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938447 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938514 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.979317 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:55:50.986668 1176706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:55:50.990400 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:55:51.033890 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:55:51.074982 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:55:51.116748 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:55:51.160579 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:55:51.202188 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
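Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A minimal Go sketch of the same check (illustrative only, not minikube's implementation; the path is just one of the certs checked above):

    // certcheck.go - does a PEM certificate expire within the next 24h,
    // mirroring `openssl x509 -noout -checkend 86400`?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }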
	I1217 00:55:51.243239 1176706 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:51.243328 1176706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:55:51.243394 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.274971 1176706 cri.go:89] found id: ""
	I1217 00:55:51.275034 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:55:51.283750 1176706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:55:51.283758 1176706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:55:51.283810 1176706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:55:51.291948 1176706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.292487 1176706 kubeconfig.go:125] found "functional-389537" server: "https://192.168.49.2:8441"
	I1217 00:55:51.293778 1176706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:55:51.304922 1176706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:41:14.220606710 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:55:50.397867980 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
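The diff above is what drives the reconfiguration: the previously rendered /var/tmp/minikube/kubeadm.yaml still carries the default admission-plugin list, while the new render carries the user-supplied NamespaceAutoProvision value, so the control plane is rebuilt from the new file. A minimal sketch of this drift check (not minikube's actual code), relying on `diff -u` exiting 0 for identical files and 1 when they differ:

    // driftcheck.go - detect kubeadm config drift by diffing the old and
    // newly rendered files, as in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeadmConfigChanged(oldPath, newPath string) (bool, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, nil // exit 0: files are identical
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            fmt.Print(string(out)) // exit 1: the files differ, show the hunks
            return true, nil
        }
        return false, err // exit 2 or another failure
    }

    func main() {
        changed, err := kubeadmConfigChanged("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        fmt.Println("config drift:", changed)
    }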
	I1217 00:55:51.304944 1176706 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:55:51.304956 1176706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 00:55:51.305024 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.335594 1176706 cri.go:89] found id: ""
	I1217 00:55:51.335654 1176706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:55:51.349252 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:55:51.357284 1176706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:45 /etc/kubernetes/scheduler.conf
	
	I1217 00:55:51.357346 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:55:51.365155 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:55:51.373122 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.373177 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:55:51.380532 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.387880 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.387941 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.395488 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:55:51.402971 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.403027 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:55:51.410207 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:55:51.417914 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:51.465120 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.243254 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.461995 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.527345 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.573822 1176706 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:55:52.573908 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.074814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.574907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.075012 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.575023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.574684 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.074609 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.574663 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.074765 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.574635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.074907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.574627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.074088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.574795 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.097233 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.574961 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.074054 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.574065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.075050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.574031 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.075006 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.574216 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.074748 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.573974 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.074753 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.574034 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.075017 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.574061 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.074905 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.574698 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.074763 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.574614 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.074085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.574076 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.074847 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.574675 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.074172 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.574715 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.074369 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.574662 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.074071 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.575002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.074917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.574153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.074723 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.574433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.074632 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.574760 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.074421 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.574365 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.074110 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.574084 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.074083 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.574229 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.075007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.574915 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.074637 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.574418 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.074231 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.574859 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.074383 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.574046 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.074153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.574749 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.074247 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.574077 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.074002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.574149 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.074309 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.574050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.074975 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.574187 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.074918 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.574916 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.074771 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.574779 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.074798 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.573985 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.074834 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.574776 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.074670 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.574866 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.074740 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.574090 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.074115 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.574007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.074661 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.574687 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.074553 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.574236 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.074239 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.574036 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.074932 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.574096 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.074026 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.574255 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.074880 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.574038 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.073993 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.574088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.574323 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.074338 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.574154 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.074792 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.574063 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.074852 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.574810 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.074586 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.574043 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.075023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.574226 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.074137 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.585259 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.074119 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.573988 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.074068 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.575029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:52.074819 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
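The roughly-every-500ms `pgrep -xnf kube-apiserver.*minikube.*` calls above are a bounded poll: the restart path waits for an API server process to appear before proceeding, and here it never does within the budget, so minikube falls back to collecting diagnostics below. A minimal sketch of this wait pattern (illustrative only, not minikube's implementation):

    // pollwait.go - poll pgrep roughly every 500ms, up to a deadline, for a
    // kube-apiserver process to appear, mirroring the wait loop logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess runs `pgrep -xnf pattern` until it exits 0 (a matching
    // process exists) or the timeout elapses.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // found a matching process
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Interval and budget mirror the log: ~500ms between polls, ~60s in total.
        if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 60*time.Second); err != nil {
            fmt.Println("wait failed:", err)
        }
    }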
	I1217 00:56:52.574056 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:52.574153 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:52.600363 1176706 cri.go:89] found id: ""
	I1217 00:56:52.600377 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.600384 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:52.600390 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:52.600466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:52.625666 1176706 cri.go:89] found id: ""
	I1217 00:56:52.625679 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.625686 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:52.625692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:52.625750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:52.651207 1176706 cri.go:89] found id: ""
	I1217 00:56:52.651220 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.651228 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:52.651233 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:52.651289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:52.675877 1176706 cri.go:89] found id: ""
	I1217 00:56:52.675891 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.675898 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:52.675904 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:52.675968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:52.705638 1176706 cri.go:89] found id: ""
	I1217 00:56:52.705651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.705658 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:52.705663 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:52.705733 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:52.734795 1176706 cri.go:89] found id: ""
	I1217 00:56:52.734809 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.734816 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:52.734821 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:52.734882 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:52.765098 1176706 cri.go:89] found id: ""
	I1217 00:56:52.765112 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.765119 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:52.765127 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:52.765138 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:52.797741 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:52.797759 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:52.872988 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:52.873007 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:52.891536 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:52.891552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:52.956983 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:52.956994 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:52.957004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.530194 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:55.540066 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:55.540129 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:55.566494 1176706 cri.go:89] found id: ""
	I1217 00:56:55.566509 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.566516 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:55.566521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:55.566579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:55.599453 1176706 cri.go:89] found id: ""
	I1217 00:56:55.599467 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.599474 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:55.599479 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:55.599539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:55.624628 1176706 cri.go:89] found id: ""
	I1217 00:56:55.624651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.624659 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:55.624664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:55.624720 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:55.650853 1176706 cri.go:89] found id: ""
	I1217 00:56:55.650867 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.650874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:55.650879 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:55.650947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:55.676274 1176706 cri.go:89] found id: ""
	I1217 00:56:55.676287 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.676295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:55.676302 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:55.676363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:55.705470 1176706 cri.go:89] found id: ""
	I1217 00:56:55.705484 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.705491 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:55.705497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:55.705577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:55.729482 1176706 cri.go:89] found id: ""
	I1217 00:56:55.729495 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.729502 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:55.729510 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:55.729520 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:55.797202 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:55.797223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:55.816424 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:55.816452 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:55.887945 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:55.887971 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:55.887984 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.962011 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:55.962032 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
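Each failed probe is followed by the same diagnostic cycle: crictl is asked, component by component, for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet containers, and every lookup returns an empty ID list. The shell sketch below reproduces that per-component check manually; it assumes you are on the node (for example via minikube ssh) and that crictl is available, which is how the log invokes it.

    # Sketch: the per-component container check the log repeats on every cycle.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done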
	I1217 00:56:58.492176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:58.503876 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:58.503952 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:58.530086 1176706 cri.go:89] found id: ""
	I1217 00:56:58.530101 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.530108 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:58.530114 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:58.530175 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:58.556063 1176706 cri.go:89] found id: ""
	I1217 00:56:58.556077 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.556084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:58.556090 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:58.556148 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:58.582188 1176706 cri.go:89] found id: ""
	I1217 00:56:58.582202 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.582209 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:58.582215 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:58.582295 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:58.607569 1176706 cri.go:89] found id: ""
	I1217 00:56:58.607583 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.607590 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:58.607595 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:58.607652 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:58.634350 1176706 cri.go:89] found id: ""
	I1217 00:56:58.634364 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.634371 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:58.634378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:58.634445 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:58.664026 1176706 cri.go:89] found id: ""
	I1217 00:56:58.664040 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.664048 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:58.664053 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:58.664114 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:58.689017 1176706 cri.go:89] found id: ""
	I1217 00:56:58.689030 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.689037 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:58.689050 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:58.689060 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:58.754795 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:58.754815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:58.775189 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:58.775206 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:58.849221 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:58.849231 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:58.849243 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:58.922086 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:58.922107 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.451030 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:01.460964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:01.461034 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:01.489661 1176706 cri.go:89] found id: ""
	I1217 00:57:01.489685 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.489693 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:01.489698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:01.489767 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:01.515445 1176706 cri.go:89] found id: ""
	I1217 00:57:01.515468 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.515476 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:01.515482 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:01.515549 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:01.540532 1176706 cri.go:89] found id: ""
	I1217 00:57:01.540546 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.540554 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:01.540560 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:01.540629 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:01.569650 1176706 cri.go:89] found id: ""
	I1217 00:57:01.569664 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.569671 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:01.569676 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:01.569738 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:01.596059 1176706 cri.go:89] found id: ""
	I1217 00:57:01.596072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.596080 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:01.596085 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:01.596140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:01.621197 1176706 cri.go:89] found id: ""
	I1217 00:57:01.621211 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.621218 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:01.621224 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:01.621282 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:01.650001 1176706 cri.go:89] found id: ""
	I1217 00:57:01.650014 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.650022 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:01.650029 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:01.650040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:01.667789 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:01.667805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:01.730637 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:01.730688 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:01.730705 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:01.804764 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:01.804783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.853135 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:01.853152 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.422102 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:04.432445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:04.432511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:04.456733 1176706 cri.go:89] found id: ""
	I1217 00:57:04.456747 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.456754 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:04.456760 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:04.456817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:04.481576 1176706 cri.go:89] found id: ""
	I1217 00:57:04.481591 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.481599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:04.481604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:04.481663 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:04.511390 1176706 cri.go:89] found id: ""
	I1217 00:57:04.511405 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.511412 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:04.511417 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:04.511481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:04.539584 1176706 cri.go:89] found id: ""
	I1217 00:57:04.539608 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.539615 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:04.539621 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:04.539686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:04.564039 1176706 cri.go:89] found id: ""
	I1217 00:57:04.564054 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.564061 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:04.564067 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:04.564126 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:04.588270 1176706 cri.go:89] found id: ""
	I1217 00:57:04.588283 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.588291 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:04.588296 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:04.588352 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:04.615420 1176706 cri.go:89] found id: ""
	I1217 00:57:04.615435 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.615442 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:04.615450 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:04.615461 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:04.648626 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:04.648647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.714893 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:04.714913 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:04.733517 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:04.733535 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:04.824195 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:04.824206 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:04.824217 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
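Every describe-nodes attempt in these cycles dies the same way: kubectl cannot reach the apiserver on localhost:8441 (connection refused), which is consistent with the empty crictl listings above. A hedged sketch of manual follow-up checks is below; the static-pod manifest path and the use of ss are assumptions about the node image, not commands taken from this log.

    # Sketch: manual checks for the repeated "connection refused" on :8441.
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
    ls -l /etc/kubernetes/manifests/kube-apiserver.yaml 2>/dev/null \
      || echo "kube-apiserver static-pod manifest not found"
    sudo journalctl -u kubelet -n 50 --no-pager | grep -i apiserver || true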
	I1217 00:57:07.400917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:07.410917 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:07.410975 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:07.437282 1176706 cri.go:89] found id: ""
	I1217 00:57:07.437303 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.437315 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:07.437325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:07.437414 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:07.466491 1176706 cri.go:89] found id: ""
	I1217 00:57:07.466506 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.466513 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:07.466518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:07.466585 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:07.491017 1176706 cri.go:89] found id: ""
	I1217 00:57:07.491030 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.491037 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:07.491042 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:07.491100 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:07.516269 1176706 cri.go:89] found id: ""
	I1217 00:57:07.516288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.516295 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:07.516301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:07.516370 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:07.541854 1176706 cri.go:89] found id: ""
	I1217 00:57:07.541867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.541874 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:07.541880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:07.541948 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:07.571479 1176706 cri.go:89] found id: ""
	I1217 00:57:07.571493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.571509 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:07.571516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:07.571576 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:07.597046 1176706 cri.go:89] found id: ""
	I1217 00:57:07.597072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.597079 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:07.597087 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:07.597097 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:07.672318 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:07.672336 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:07.672349 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.747576 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:07.747595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:07.779509 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:07.779525 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:07.855959 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:07.855980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.376085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:10.386576 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:10.386639 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:10.413996 1176706 cri.go:89] found id: ""
	I1217 00:57:10.414010 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.414017 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:10.414022 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:10.414082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:10.440046 1176706 cri.go:89] found id: ""
	I1217 00:57:10.440060 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.440067 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:10.440073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:10.440131 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:10.465533 1176706 cri.go:89] found id: ""
	I1217 00:57:10.465547 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.465563 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:10.465569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:10.465631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:10.491563 1176706 cri.go:89] found id: ""
	I1217 00:57:10.491577 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.491585 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:10.491590 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:10.491653 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:10.519680 1176706 cri.go:89] found id: ""
	I1217 00:57:10.519694 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.519710 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:10.519717 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:10.519778 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:10.556939 1176706 cri.go:89] found id: ""
	I1217 00:57:10.556956 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.556963 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:10.556969 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:10.557025 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:10.582061 1176706 cri.go:89] found id: ""
	I1217 00:57:10.582075 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.582082 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:10.582091 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:10.582102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:10.651854 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:10.651875 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.671002 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:10.671020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:10.744191 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:10.744201 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:10.744213 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:10.823224 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:10.823244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:13.353067 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:13.363299 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:13.363363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:13.388077 1176706 cri.go:89] found id: ""
	I1217 00:57:13.388090 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.388098 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:13.388103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:13.388166 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:13.414095 1176706 cri.go:89] found id: ""
	I1217 00:57:13.414109 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.414117 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:13.414122 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:13.414178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:13.439153 1176706 cri.go:89] found id: ""
	I1217 00:57:13.439167 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.439174 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:13.439180 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:13.439237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:13.465255 1176706 cri.go:89] found id: ""
	I1217 00:57:13.465269 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.465277 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:13.465282 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:13.465342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:13.495274 1176706 cri.go:89] found id: ""
	I1217 00:57:13.495288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.495295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:13.495301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:13.495359 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:13.520781 1176706 cri.go:89] found id: ""
	I1217 00:57:13.520795 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.520803 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:13.520808 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:13.520868 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:13.547934 1176706 cri.go:89] found id: ""
	I1217 00:57:13.547948 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.547955 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:13.547963 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:13.547974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:13.613843 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:13.613863 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:13.632465 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:13.632491 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:13.697651 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:13.697662 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:13.697673 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:13.766608 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:13.766627 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:16.302176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:16.312389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:16.312476 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:16.338447 1176706 cri.go:89] found id: ""
	I1217 00:57:16.338461 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.338468 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:16.338473 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:16.338533 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:16.365319 1176706 cri.go:89] found id: ""
	I1217 00:57:16.365333 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.365340 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:16.365346 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:16.365408 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:16.396455 1176706 cri.go:89] found id: ""
	I1217 00:57:16.396476 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.396483 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:16.396489 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:16.396550 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:16.425795 1176706 cri.go:89] found id: ""
	I1217 00:57:16.425809 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.425816 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:16.425822 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:16.425887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:16.454749 1176706 cri.go:89] found id: ""
	I1217 00:57:16.454763 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.454770 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:16.454776 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:16.454834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:16.479542 1176706 cri.go:89] found id: ""
	I1217 00:57:16.479555 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.479562 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:16.479567 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:16.479626 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:16.508783 1176706 cri.go:89] found id: ""
	I1217 00:57:16.508798 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.508805 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:16.508813 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:16.508824 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:16.577494 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:16.577515 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:16.595191 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:16.595211 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:16.665505 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:16.665516 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:16.665528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:16.733110 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:16.733132 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:19.271702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:19.282422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:19.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:19.310766 1176706 cri.go:89] found id: ""
	I1217 00:57:19.310781 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.310788 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:19.310794 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:19.310856 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:19.336393 1176706 cri.go:89] found id: ""
	I1217 00:57:19.336407 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.336435 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:19.336441 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:19.336512 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:19.363243 1176706 cri.go:89] found id: ""
	I1217 00:57:19.363258 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.363265 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:19.363270 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:19.363329 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:19.389985 1176706 cri.go:89] found id: ""
	I1217 00:57:19.390000 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.390007 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:19.390013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:19.390073 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:19.416019 1176706 cri.go:89] found id: ""
	I1217 00:57:19.416032 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.416040 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:19.416045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:19.416103 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:19.445523 1176706 cri.go:89] found id: ""
	I1217 00:57:19.445538 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.445545 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:19.445550 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:19.445611 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:19.470033 1176706 cri.go:89] found id: ""
	I1217 00:57:19.470047 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.470055 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:19.470063 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:19.470075 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:19.535642 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:19.535662 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:19.553701 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:19.553718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:19.615955 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:19.615966 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:19.615977 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:19.685077 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:19.685098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.217382 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:22.227714 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:22.227775 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:22.252242 1176706 cri.go:89] found id: ""
	I1217 00:57:22.252256 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.252263 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:22.252268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:22.252325 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:22.277476 1176706 cri.go:89] found id: ""
	I1217 00:57:22.277491 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.277498 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:22.277504 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:22.277561 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:22.302807 1176706 cri.go:89] found id: ""
	I1217 00:57:22.302821 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.302829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:22.302834 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:22.302905 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:22.332455 1176706 cri.go:89] found id: ""
	I1217 00:57:22.332469 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.332476 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:22.332483 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:22.332552 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:22.361365 1176706 cri.go:89] found id: ""
	I1217 00:57:22.361380 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.361387 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:22.361392 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:22.361453 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:22.387211 1176706 cri.go:89] found id: ""
	I1217 00:57:22.387224 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.387232 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:22.387237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:22.387297 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:22.413238 1176706 cri.go:89] found id: ""
	I1217 00:57:22.413252 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.413260 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:22.413267 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:22.413278 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:22.478085 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:22.478096 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:22.478105 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:22.546790 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:22.546813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.582711 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:22.582732 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:22.648758 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:22.648780 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.166726 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:25.177337 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:25.177400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:25.202561 1176706 cri.go:89] found id: ""
	I1217 00:57:25.202576 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.202583 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:25.202589 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:25.202650 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:25.231070 1176706 cri.go:89] found id: ""
	I1217 00:57:25.231085 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.231092 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:25.231098 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:25.231162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:25.256786 1176706 cri.go:89] found id: ""
	I1217 00:57:25.256799 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.256806 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:25.256811 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:25.256870 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:25.282392 1176706 cri.go:89] found id: ""
	I1217 00:57:25.282415 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.282423 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:25.282429 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:25.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:25.311168 1176706 cri.go:89] found id: ""
	I1217 00:57:25.311182 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.311189 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:25.311195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:25.311259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:25.339431 1176706 cri.go:89] found id: ""
	I1217 00:57:25.339446 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.339453 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:25.339459 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:25.339517 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:25.365122 1176706 cri.go:89] found id: ""
	I1217 00:57:25.365136 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.365144 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:25.365152 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:25.365162 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:25.430307 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:25.430326 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.447805 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:25.447822 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:25.515790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:25.515802 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:25.515813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:25.590022 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:25.590049 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.122003 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:28.132581 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:28.132644 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:28.158913 1176706 cri.go:89] found id: ""
	I1217 00:57:28.158927 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.158944 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:28.158950 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:28.159029 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:28.185443 1176706 cri.go:89] found id: ""
	I1217 00:57:28.185478 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.185486 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:28.185492 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:28.185565 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:28.212156 1176706 cri.go:89] found id: ""
	I1217 00:57:28.212180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.212187 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:28.212193 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:28.212303 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:28.238113 1176706 cri.go:89] found id: ""
	I1217 00:57:28.238128 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.238135 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:28.238140 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:28.238198 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:28.267252 1176706 cri.go:89] found id: ""
	I1217 00:57:28.267266 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.267273 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:28.267278 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:28.267335 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:28.299262 1176706 cri.go:89] found id: ""
	I1217 00:57:28.299277 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.299284 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:28.299290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:28.299349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:28.325216 1176706 cri.go:89] found id: ""
	I1217 00:57:28.325231 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.325247 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:28.325255 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:28.325267 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:28.342976 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:28.342992 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:28.411022 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:28.411033 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:28.411044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:28.479626 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:28.479647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.508235 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:28.508251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.075024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:31.085476 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:31.085543 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:31.115244 1176706 cri.go:89] found id: ""
	I1217 00:57:31.115259 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.115267 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:31.115272 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:31.115332 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:31.146093 1176706 cri.go:89] found id: ""
	I1217 00:57:31.146111 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.146119 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:31.146125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:31.146188 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:31.173490 1176706 cri.go:89] found id: ""
	I1217 00:57:31.173505 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.173512 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:31.173518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:31.173577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:31.199862 1176706 cri.go:89] found id: ""
	I1217 00:57:31.199876 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.199883 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:31.199889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:31.199953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:31.229151 1176706 cri.go:89] found id: ""
	I1217 00:57:31.229164 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.229172 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:31.229177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:31.229234 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:31.255292 1176706 cri.go:89] found id: ""
	I1217 00:57:31.255306 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.255313 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:31.255319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:31.255378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:31.280011 1176706 cri.go:89] found id: ""
	I1217 00:57:31.280024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.280032 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:31.280040 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:31.280050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:31.351624 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:31.351644 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:31.380210 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:31.380226 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.448265 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:31.448288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:31.466144 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:31.466161 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:31.530079 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.030804 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:34.041923 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:34.041984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:34.070601 1176706 cri.go:89] found id: ""
	I1217 00:57:34.070617 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.070624 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:34.070630 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:34.070689 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:34.097552 1176706 cri.go:89] found id: ""
	I1217 00:57:34.097566 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.097573 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:34.097579 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:34.097647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:34.124476 1176706 cri.go:89] found id: ""
	I1217 00:57:34.124490 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.124497 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:34.124503 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:34.124580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:34.150077 1176706 cri.go:89] found id: ""
	I1217 00:57:34.150091 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.150099 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:34.150104 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:34.150162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:34.176964 1176706 cri.go:89] found id: ""
	I1217 00:57:34.176978 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.176992 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:34.176998 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:34.177055 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:34.201831 1176706 cri.go:89] found id: ""
	I1217 00:57:34.201845 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.201852 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:34.201857 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:34.201914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:34.227100 1176706 cri.go:89] found id: ""
	I1217 00:57:34.227114 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.227122 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:34.227129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:34.227140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:34.292098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.292108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:34.292119 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:34.361262 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:34.361287 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:34.395072 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:34.395087 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:34.462475 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:34.462498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:36.980702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:36.992944 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:36.993003 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:37.025574 1176706 cri.go:89] found id: ""
	I1217 00:57:37.025592 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.025616 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:37.025622 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:37.025707 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:37.054876 1176706 cri.go:89] found id: ""
	I1217 00:57:37.054890 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.054897 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:37.054903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:37.054968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:37.084974 1176706 cri.go:89] found id: ""
	I1217 00:57:37.084987 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.084995 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:37.085000 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:37.085059 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:37.110853 1176706 cri.go:89] found id: ""
	I1217 00:57:37.110867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.110874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:37.110883 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:37.110941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:37.137064 1176706 cri.go:89] found id: ""
	I1217 00:57:37.137083 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.137090 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:37.137096 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:37.137159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:37.167116 1176706 cri.go:89] found id: ""
	I1217 00:57:37.167130 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.167148 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:37.167162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:37.167230 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:37.192827 1176706 cri.go:89] found id: ""
	I1217 00:57:37.192848 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.192856 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:37.192863 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:37.192874 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:37.210956 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:37.210974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:37.275882 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:37.275893 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:37.275904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:37.344194 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:37.344215 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:37.375642 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:37.375658 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:39.944605 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:39.954951 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:39.955014 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:39.984302 1176706 cri.go:89] found id: ""
	I1217 00:57:39.984316 1176706 logs.go:282] 0 containers: []
	W1217 00:57:39.984323 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:39.984328 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:39.984383 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:40.029442 1176706 cri.go:89] found id: ""
	I1217 00:57:40.029458 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.029466 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:40.029471 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:40.029538 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:40.063022 1176706 cri.go:89] found id: ""
	I1217 00:57:40.063037 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.063044 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:40.063049 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:40.063110 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:40.094257 1176706 cri.go:89] found id: ""
	I1217 00:57:40.094272 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.094280 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:40.094286 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:40.094349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:40.127887 1176706 cri.go:89] found id: ""
	I1217 00:57:40.127901 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.127908 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:40.127913 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:40.127972 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:40.155475 1176706 cri.go:89] found id: ""
	I1217 00:57:40.155489 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.155496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:40.155502 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:40.155560 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:40.181940 1176706 cri.go:89] found id: ""
	I1217 00:57:40.181955 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.181962 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:40.181970 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:40.181980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:40.254464 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:40.254484 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:40.285810 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:40.285825 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:40.352509 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:40.352528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:40.370334 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:40.370356 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:40.432624 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:42.932898 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:42.943186 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:42.943245 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:42.971121 1176706 cri.go:89] found id: ""
	I1217 00:57:42.971137 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.971144 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:42.971149 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:42.971207 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:42.997154 1176706 cri.go:89] found id: ""
	I1217 00:57:42.997169 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.997175 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:42.997181 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:42.997240 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:43.034752 1176706 cri.go:89] found id: ""
	I1217 00:57:43.034767 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.034775 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:43.034781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:43.034840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:43.064326 1176706 cri.go:89] found id: ""
	I1217 00:57:43.064339 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.064347 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:43.064352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:43.064428 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:43.095997 1176706 cri.go:89] found id: ""
	I1217 00:57:43.096011 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.096019 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:43.096024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:43.096082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:43.126545 1176706 cri.go:89] found id: ""
	I1217 00:57:43.126560 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.126568 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:43.126573 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:43.126633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:43.157043 1176706 cri.go:89] found id: ""
	I1217 00:57:43.157058 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.157065 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:43.157073 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:43.157102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:43.223228 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:43.223248 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:43.241053 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:43.241070 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:43.307388 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:43.307398 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:43.307409 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:43.376649 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:43.376669 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
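The cycle above is minikube's apiserver wait loop: it looks for a kube-apiserver process, asks CRI-O for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, CRI-O and describe-nodes output before retrying a few seconds later. A minimal sketch of the equivalent manual checks, using only commands and paths that appear in the log output above (nothing here was run as part of this test):

    # Is an apiserver process running inside the node?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Does CRI-O know about any control-plane containers?
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # Same log-gathering steps the loop performs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

    # The describe-nodes call that keeps failing with "connection refused"
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig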
	I1217 00:57:45.908814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:45.918992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:45.919051 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:45.944157 1176706 cri.go:89] found id: ""
	I1217 00:57:45.944170 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.944178 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:45.944183 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:45.944242 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:45.969417 1176706 cri.go:89] found id: ""
	I1217 00:57:45.969431 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.969438 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:45.969444 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:45.969502 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:45.995472 1176706 cri.go:89] found id: ""
	I1217 00:57:45.995486 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.995494 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:45.995499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:45.995566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:46.034994 1176706 cri.go:89] found id: ""
	I1217 00:57:46.035007 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.035015 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:46.035020 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:46.035081 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:46.065460 1176706 cri.go:89] found id: ""
	I1217 00:57:46.065473 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.065480 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:46.065486 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:46.065559 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:46.092450 1176706 cri.go:89] found id: ""
	I1217 00:57:46.092465 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.092472 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:46.092478 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:46.092557 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:46.122198 1176706 cri.go:89] found id: ""
	I1217 00:57:46.122212 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.122221 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:46.122229 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:46.122241 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:46.140129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:46.140147 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:46.204790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:46.204800 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:46.204810 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:46.273034 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:46.273054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:46.300763 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:46.300778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:48.875764 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:48.886304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:48.886369 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:48.923231 1176706 cri.go:89] found id: ""
	I1217 00:57:48.923246 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.923254 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:48.923259 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:48.923334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:48.951521 1176706 cri.go:89] found id: ""
	I1217 00:57:48.951536 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.951544 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:48.951549 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:48.951610 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:48.977574 1176706 cri.go:89] found id: ""
	I1217 00:57:48.977588 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.977595 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:48.977600 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:48.977661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:49.016389 1176706 cri.go:89] found id: ""
	I1217 00:57:49.016402 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.016410 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:49.016446 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:49.016511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:49.050180 1176706 cri.go:89] found id: ""
	I1217 00:57:49.050193 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.050201 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:49.050206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:49.050271 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:49.088387 1176706 cri.go:89] found id: ""
	I1217 00:57:49.088401 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.088409 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:49.088445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:49.088508 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:49.118579 1176706 cri.go:89] found id: ""
	I1217 00:57:49.118593 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.118600 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:49.118608 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:49.118618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:49.189917 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:49.189938 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:49.208217 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:49.208234 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:49.270961 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:49.270977 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:49.270988 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:49.340033 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:49.340054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:51.873428 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:51.883781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:51.883840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:51.908479 1176706 cri.go:89] found id: ""
	I1217 00:57:51.908493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.908500 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:51.908505 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:51.908562 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:51.938045 1176706 cri.go:89] found id: ""
	I1217 00:57:51.938061 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.938068 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:51.938073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:51.938135 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:51.964570 1176706 cri.go:89] found id: ""
	I1217 00:57:51.964585 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.964592 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:51.964597 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:51.964654 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:51.989700 1176706 cri.go:89] found id: ""
	I1217 00:57:51.989714 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.989722 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:51.989727 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:51.989784 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:52.030756 1176706 cri.go:89] found id: ""
	I1217 00:57:52.030771 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.030779 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:52.030786 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:52.030860 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:52.067806 1176706 cri.go:89] found id: ""
	I1217 00:57:52.067829 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.067838 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:52.067845 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:52.067915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:52.097071 1176706 cri.go:89] found id: ""
	I1217 00:57:52.097102 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.097110 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:52.097118 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:52.097128 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:52.169931 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:52.169952 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:52.202012 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:52.202031 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:52.267897 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:52.267917 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:52.286898 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:52.286920 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:52.352095 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:54.853773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:54.863649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:54.863712 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:54.888435 1176706 cri.go:89] found id: ""
	I1217 00:57:54.888449 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.888456 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:54.888462 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:54.888523 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:54.927009 1176706 cri.go:89] found id: ""
	I1217 00:57:54.927024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.927031 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:54.927037 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:54.927095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:54.953405 1176706 cri.go:89] found id: ""
	I1217 00:57:54.953420 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.953428 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:54.953434 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:54.953493 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:54.979162 1176706 cri.go:89] found id: ""
	I1217 00:57:54.979176 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.979183 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:54.979189 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:54.979256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:55.025542 1176706 cri.go:89] found id: ""
	I1217 00:57:55.025564 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.025572 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:55.025577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:55.025641 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:55.059408 1176706 cri.go:89] found id: ""
	I1217 00:57:55.059422 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.059429 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:55.059435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:55.059492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:55.085846 1176706 cri.go:89] found id: ""
	I1217 00:57:55.085860 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.085867 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:55.085875 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:55.085884 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:55.154061 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:55.154083 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:55.182650 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:55.182667 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:55.252924 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:55.252945 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:55.271464 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:55.271481 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:55.340175 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
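Each describe-nodes attempt fails identically: nothing is answering on localhost:8441, so kubectl is refused before API discovery can even start. A hypothetical follow-up probe (not part of this run, and the /livez endpoint and ss/curl usage are assumptions, not taken from the log) to confirm whether anything is listening on that port from inside the node could look like:

    # Check for a listener on the apiserver port seen in the errors above
    sudo ss -lntp | grep 8441 || echo "nothing listening on 8441"

    # Anonymous probe of the apiserver health endpoint; expected to fail with
    # "connection refused" for as long as kube-apiserver is not running
    curl -k https://localhost:8441/livez || true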
	I1217 00:57:57.840461 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:57.853057 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:57.853178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:57.883066 1176706 cri.go:89] found id: ""
	I1217 00:57:57.883081 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.883088 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:57.883094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:57.883152 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:57.909166 1176706 cri.go:89] found id: ""
	I1217 00:57:57.909180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.909189 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:57.909195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:57.909255 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:57.935701 1176706 cri.go:89] found id: ""
	I1217 00:57:57.935716 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.935733 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:57.935739 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:57.935805 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:57.969374 1176706 cri.go:89] found id: ""
	I1217 00:57:57.969397 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.969404 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:57.969410 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:57.969481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:57.995365 1176706 cri.go:89] found id: ""
	I1217 00:57:57.995379 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.995397 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:57.995404 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:57.995460 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:58.025187 1176706 cri.go:89] found id: ""
	I1217 00:57:58.025207 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.025215 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:58.025221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:58.025343 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:58.062705 1176706 cri.go:89] found id: ""
	I1217 00:57:58.062719 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.062738 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:58.062745 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:58.062755 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:58.135108 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:58.135129 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:58.154038 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:58.154058 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:58.219558 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:58.219569 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:58.219582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:58.287658 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:58.287678 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:00.817470 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:00.827992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:00.828056 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:00.852955 1176706 cri.go:89] found id: ""
	I1217 00:58:00.852969 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.852976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:00.852983 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:00.853043 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:00.877725 1176706 cri.go:89] found id: ""
	I1217 00:58:00.877739 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.877746 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:00.877751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:00.877811 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:00.901883 1176706 cri.go:89] found id: ""
	I1217 00:58:00.901897 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.901905 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:00.901910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:00.901965 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:00.928695 1176706 cri.go:89] found id: ""
	I1217 00:58:00.928709 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.928716 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:00.928722 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:00.928780 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:00.953517 1176706 cri.go:89] found id: ""
	I1217 00:58:00.953531 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.953538 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:00.953544 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:00.953601 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:00.982916 1176706 cri.go:89] found id: ""
	I1217 00:58:00.982930 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.982946 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:00.982952 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:00.983021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:01.012486 1176706 cri.go:89] found id: ""
	I1217 00:58:01.012510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:01.012518 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:01.012526 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:01.012538 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:01.034573 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:01.034595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:01.107160 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:01.107170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:01.107180 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:01.180136 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:01.180158 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:01.212434 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:01.212451 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:03.780773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:03.791245 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:03.791309 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:03.819281 1176706 cri.go:89] found id: ""
	I1217 00:58:03.819296 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.819304 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:03.819309 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:03.819367 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:03.847330 1176706 cri.go:89] found id: ""
	I1217 00:58:03.847344 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.847351 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:03.847357 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:03.847416 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:03.874793 1176706 cri.go:89] found id: ""
	I1217 00:58:03.874806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.874814 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:03.874819 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:03.874883 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:03.904651 1176706 cri.go:89] found id: ""
	I1217 00:58:03.904665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.904672 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:03.904678 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:03.904744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:03.930157 1176706 cri.go:89] found id: ""
	I1217 00:58:03.930178 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.930186 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:03.930191 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:03.930252 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:03.960348 1176706 cri.go:89] found id: ""
	I1217 00:58:03.960371 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.960380 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:03.960386 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:03.960473 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:03.985501 1176706 cri.go:89] found id: ""
	I1217 00:58:03.985515 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.985523 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:03.985530 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:03.985541 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:04.005563 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:04.005592 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:04.085204 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:04.085219 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:04.085231 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:04.154363 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:04.154385 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:04.182481 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:04.182498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:06.754413 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:06.765192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:06.765266 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:06.792761 1176706 cri.go:89] found id: ""
	I1217 00:58:06.792779 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.792786 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:06.792791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:06.792850 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:06.817882 1176706 cri.go:89] found id: ""
	I1217 00:58:06.817896 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.817903 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:06.817909 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:06.817967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:06.843295 1176706 cri.go:89] found id: ""
	I1217 00:58:06.843309 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.843316 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:06.843321 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:06.843380 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:06.871025 1176706 cri.go:89] found id: ""
	I1217 00:58:06.871039 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.871046 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:06.871052 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:06.871109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:06.899109 1176706 cri.go:89] found id: ""
	I1217 00:58:06.899124 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.899132 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:06.899137 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:06.899212 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:06.923948 1176706 cri.go:89] found id: ""
	I1217 00:58:06.923962 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.923980 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:06.923987 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:06.924045 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:06.948813 1176706 cri.go:89] found id: ""
	I1217 00:58:06.948827 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.948834 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:06.948842 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:06.948853 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:07.015114 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:07.015140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:07.034991 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:07.035010 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:07.105757 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:07.105767 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:07.105778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:07.177693 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:07.177717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:09.709755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:09.720409 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:09.720507 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:09.745603 1176706 cri.go:89] found id: ""
	I1217 00:58:09.745618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.745626 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:09.745631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:09.745691 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:09.775492 1176706 cri.go:89] found id: ""
	I1217 00:58:09.775507 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.775515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:09.775520 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:09.775579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:09.801149 1176706 cri.go:89] found id: ""
	I1217 00:58:09.801164 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.801171 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:09.801177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:09.801238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:09.830147 1176706 cri.go:89] found id: ""
	I1217 00:58:09.830160 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.830168 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:09.830173 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:09.830232 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:09.858791 1176706 cri.go:89] found id: ""
	I1217 00:58:09.858806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.858825 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:09.858832 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:09.858911 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:09.884827 1176706 cri.go:89] found id: ""
	I1217 00:58:09.884842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.884849 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:09.884855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:09.884918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:09.910380 1176706 cri.go:89] found id: ""
	I1217 00:58:09.910394 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.910402 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:09.910409 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:09.910420 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:09.976905 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:09.976924 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:09.995004 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:09.995027 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:10.084593 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:10.084604 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:10.084614 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:10.157583 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:10.157604 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.691225 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:12.701275 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:12.701340 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:12.730986 1176706 cri.go:89] found id: ""
	I1217 00:58:12.731000 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.731018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:12.731024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:12.731084 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:12.757010 1176706 cri.go:89] found id: ""
	I1217 00:58:12.757029 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.757037 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:12.757045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:12.757119 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:12.782232 1176706 cri.go:89] found id: ""
	I1217 00:58:12.782245 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.782252 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:12.782257 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:12.782314 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:12.808352 1176706 cri.go:89] found id: ""
	I1217 00:58:12.808366 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.808373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:12.808378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:12.808472 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:12.834094 1176706 cri.go:89] found id: ""
	I1217 00:58:12.834109 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.834116 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:12.834121 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:12.834184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:12.861537 1176706 cri.go:89] found id: ""
	I1217 00:58:12.861551 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.861558 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:12.861564 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:12.861625 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:12.891320 1176706 cri.go:89] found id: ""
	I1217 00:58:12.891334 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.891351 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:12.891360 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:12.891373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:12.961252 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:12.961272 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.990873 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:12.990889 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:13.068166 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:13.068185 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:13.087641 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:13.087660 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:13.158967 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:15.660635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:15.670593 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:15.670685 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:15.695674 1176706 cri.go:89] found id: ""
	I1217 00:58:15.695688 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.695695 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:15.695700 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:15.695757 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:15.723007 1176706 cri.go:89] found id: ""
	I1217 00:58:15.723020 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.723028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:15.723033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:15.723093 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:15.752134 1176706 cri.go:89] found id: ""
	I1217 00:58:15.752149 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.752156 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:15.752161 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:15.752219 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:15.777521 1176706 cri.go:89] found id: ""
	I1217 00:58:15.777535 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.777542 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:15.777547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:15.777606 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:15.805205 1176706 cri.go:89] found id: ""
	I1217 00:58:15.805220 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.805233 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:15.805239 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:15.805296 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:15.830102 1176706 cri.go:89] found id: ""
	I1217 00:58:15.830116 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.830123 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:15.830129 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:15.830191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:15.859258 1176706 cri.go:89] found id: ""
	I1217 00:58:15.859272 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.859279 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:15.859297 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:15.859307 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:15.924910 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:15.924930 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:15.943203 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:15.943219 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:16.011016 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:16.011027 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:16.011038 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:16.094076 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:16.094096 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:18.624032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:18.634861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:18.634925 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:18.660502 1176706 cri.go:89] found id: ""
	I1217 00:58:18.660528 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.660536 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:18.660541 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:18.660600 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:18.685828 1176706 cri.go:89] found id: ""
	I1217 00:58:18.685841 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.685848 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:18.685854 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:18.685920 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:18.716173 1176706 cri.go:89] found id: ""
	I1217 00:58:18.716187 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.716194 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:18.716199 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:18.716260 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:18.742960 1176706 cri.go:89] found id: ""
	I1217 00:58:18.742975 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.742983 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:18.742988 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:18.743046 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:18.768597 1176706 cri.go:89] found id: ""
	I1217 00:58:18.768610 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.768623 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:18.768628 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:18.768687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:18.795244 1176706 cri.go:89] found id: ""
	I1217 00:58:18.795267 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.795276 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:18.795281 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:18.795355 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:18.826316 1176706 cri.go:89] found id: ""
	I1217 00:58:18.826330 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.826337 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:18.826345 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:18.826354 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:18.892936 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:18.892954 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:18.911274 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:18.911292 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:18.973399 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:18.973409 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:18.973432 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:19.052103 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:19.052124 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.589056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:21.599320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:21.599382 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:21.626547 1176706 cri.go:89] found id: ""
	I1217 00:58:21.626561 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.626568 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:21.626574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:21.626631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:21.651881 1176706 cri.go:89] found id: ""
	I1217 00:58:21.651895 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.651902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:21.651910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:21.651967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:21.677496 1176706 cri.go:89] found id: ""
	I1217 00:58:21.677510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.677519 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:21.677524 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:21.677580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:21.701536 1176706 cri.go:89] found id: ""
	I1217 00:58:21.701550 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.701557 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:21.701562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:21.701619 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:21.725663 1176706 cri.go:89] found id: ""
	I1217 00:58:21.725677 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.725695 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:21.725701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:21.725772 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:21.749912 1176706 cri.go:89] found id: ""
	I1217 00:58:21.749926 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.749937 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:21.749943 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:21.750000 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:21.774360 1176706 cri.go:89] found id: ""
	I1217 00:58:21.774374 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.774381 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:21.774389 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:21.774399 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:21.841964 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:21.841983 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.870200 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:21.870218 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:21.943734 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:21.943754 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:21.961798 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:21.961816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:22.037147 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.537433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:24.547596 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:24.547661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:24.575282 1176706 cri.go:89] found id: ""
	I1217 00:58:24.575297 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.575306 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:24.575312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:24.575371 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:24.600578 1176706 cri.go:89] found id: ""
	I1217 00:58:24.600592 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.600599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:24.600604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:24.600665 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:24.626604 1176706 cri.go:89] found id: ""
	I1217 00:58:24.626618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.626626 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:24.626631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:24.626687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:24.652284 1176706 cri.go:89] found id: ""
	I1217 00:58:24.652298 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.652316 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:24.652323 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:24.652381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:24.681413 1176706 cri.go:89] found id: ""
	I1217 00:58:24.681426 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.681433 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:24.681439 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:24.681495 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:24.709801 1176706 cri.go:89] found id: ""
	I1217 00:58:24.709815 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.709822 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:24.709830 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:24.709887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:24.741982 1176706 cri.go:89] found id: ""
	I1217 00:58:24.741995 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.742010 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:24.742018 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:24.742029 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:24.806559 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.806571 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:24.806581 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:24.875943 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:24.875962 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:24.904944 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:24.904960 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:24.972857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:24.972878 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.491741 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:27.502162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:27.502241 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:27.528328 1176706 cri.go:89] found id: ""
	I1217 00:58:27.528343 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.528350 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:27.528356 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:27.528455 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:27.558520 1176706 cri.go:89] found id: ""
	I1217 00:58:27.558534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.558541 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:27.558547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:27.558605 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:27.587047 1176706 cri.go:89] found id: ""
	I1217 00:58:27.587061 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.587070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:27.587075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:27.587133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:27.615351 1176706 cri.go:89] found id: ""
	I1217 00:58:27.615365 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.615373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:27.615381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:27.615443 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:27.640936 1176706 cri.go:89] found id: ""
	I1217 00:58:27.640950 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.640959 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:27.640964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:27.641021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:27.667985 1176706 cri.go:89] found id: ""
	I1217 00:58:27.667999 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.668007 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:27.668013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:27.668077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:27.694148 1176706 cri.go:89] found id: ""
	I1217 00:58:27.694162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.694170 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:27.694177 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:27.694188 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:27.764618 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:27.764639 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.784025 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:27.784040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:27.852310 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:27.852320 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:27.852331 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:27.922044 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:27.922065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:30.450766 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:30.460791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:30.460852 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:30.485997 1176706 cri.go:89] found id: ""
	I1217 00:58:30.486011 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.486018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:30.486023 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:30.486080 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:30.512125 1176706 cri.go:89] found id: ""
	I1217 00:58:30.512138 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.512157 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:30.512163 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:30.512221 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:30.538512 1176706 cri.go:89] found id: ""
	I1217 00:58:30.538526 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.538533 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:30.538539 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:30.538597 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:30.564757 1176706 cri.go:89] found id: ""
	I1217 00:58:30.564771 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.564778 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:30.564784 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:30.564842 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:30.594808 1176706 cri.go:89] found id: ""
	I1217 00:58:30.594821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.594840 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:30.594846 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:30.594919 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:30.624595 1176706 cri.go:89] found id: ""
	I1217 00:58:30.624609 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.624617 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:30.624623 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:30.624683 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:30.653013 1176706 cri.go:89] found id: ""
	I1217 00:58:30.653027 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.653034 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:30.653042 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:30.653052 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:30.720030 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:30.720050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:30.738237 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:30.738255 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:30.801692 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:30.801705 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:30.801717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:30.870606 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:30.870628 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
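Each "found id: \"\"" / "0 containers" pair above comes from running crictl on the node and getting an empty list back, i.e. the runtime never created the control-plane container in question. A minimal sketch of that check (assuming crictl is installed and sudo is non-interactive; this is an illustration, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log shows: list all containers whose name matches kube-apiserver.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		// Zero IDs corresponds to the `found id: ""` lines in the log.
		fmt.Printf("found %d kube-apiserver containers: %v\n", len(ids), ids)
	}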
	I1217 00:58:33.401439 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:33.411804 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:33.411865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:33.437664 1176706 cri.go:89] found id: ""
	I1217 00:58:33.437678 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.437686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:33.437692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:33.437752 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:33.463774 1176706 cri.go:89] found id: ""
	I1217 00:58:33.463796 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.463803 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:33.463809 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:33.463865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:33.492800 1176706 cri.go:89] found id: ""
	I1217 00:58:33.492822 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.492829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:33.492835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:33.492896 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:33.518396 1176706 cri.go:89] found id: ""
	I1217 00:58:33.518410 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.518417 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:33.518422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:33.518481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:33.545369 1176706 cri.go:89] found id: ""
	I1217 00:58:33.545385 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.545393 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:33.545398 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:33.545469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:33.571642 1176706 cri.go:89] found id: ""
	I1217 00:58:33.571665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.571673 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:33.571679 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:33.571751 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:33.598928 1176706 cri.go:89] found id: ""
	I1217 00:58:33.598953 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.598961 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:33.598970 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:33.598980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:33.617218 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:33.617237 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:33.681042 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:33.681053 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:33.681064 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:33.750561 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:33.750582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.779618 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:33.779637 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.351872 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:36.361748 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:36.361812 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:36.387484 1176706 cri.go:89] found id: ""
	I1217 00:58:36.387498 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.387505 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:36.387511 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:36.387567 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:36.413880 1176706 cri.go:89] found id: ""
	I1217 00:58:36.413894 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.413902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:36.413922 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:36.413979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:36.439073 1176706 cri.go:89] found id: ""
	I1217 00:58:36.439087 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.439095 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:36.439100 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:36.439159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:36.464148 1176706 cri.go:89] found id: ""
	I1217 00:58:36.464162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.464169 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:36.464175 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:36.464237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:36.489659 1176706 cri.go:89] found id: ""
	I1217 00:58:36.489673 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.489681 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:36.489686 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:36.489744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:36.514865 1176706 cri.go:89] found id: ""
	I1217 00:58:36.514879 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.514887 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:36.514892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:36.514953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:36.545081 1176706 cri.go:89] found id: ""
	I1217 00:58:36.545095 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.545103 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:36.545110 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:36.545120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:36.620571 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:36.620599 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:36.652294 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:36.652313 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.720685 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:36.720708 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:36.738692 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:36.738709 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:36.804409 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
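With no component containers to inspect, minikube falls back to the diagnostic sweep visible above: the kubelet and CRI-O journals, dmesg, "describe nodes", and container status. A rough sketch of that sweep run locally (command strings copied from the log; minikube actually executes them on the node over SSH via ssh_runner.go, which is not reproduced here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The collectors the log shows being gathered for this profile, in order.
		collectors := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, c := range collectors {
			// CombinedOutput keeps stderr, which is where most of the useful detail ends up.
			out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
		}
	}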
	I1217 00:58:39.304571 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:39.315407 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:39.315469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:39.343748 1176706 cri.go:89] found id: ""
	I1217 00:58:39.343762 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.343769 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:39.343775 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:39.343834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:39.371633 1176706 cri.go:89] found id: ""
	I1217 00:58:39.371648 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.371655 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:39.371661 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:39.371750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:39.397168 1176706 cri.go:89] found id: ""
	I1217 00:58:39.397183 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.397190 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:39.397196 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:39.397254 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:39.422379 1176706 cri.go:89] found id: ""
	I1217 00:58:39.422393 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.422400 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:39.422406 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:39.422466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:39.451362 1176706 cri.go:89] found id: ""
	I1217 00:58:39.451376 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.451384 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:39.451389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:39.451447 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:39.476838 1176706 cri.go:89] found id: ""
	I1217 00:58:39.476852 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.476862 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:39.476867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:39.476926 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:39.501892 1176706 cri.go:89] found id: ""
	I1217 00:58:39.501905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.501912 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:39.501924 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:39.501933 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:39.571771 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:39.571783 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:39.571793 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:39.642123 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:39.642144 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:39.673585 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:39.673602 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:39.742217 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:39.742236 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.260825 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:42.274064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:42.274140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:42.315322 1176706 cri.go:89] found id: ""
	I1217 00:58:42.315336 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.315346 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:42.315352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:42.315432 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:42.348891 1176706 cri.go:89] found id: ""
	I1217 00:58:42.348906 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.348914 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:42.348920 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:42.348984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:42.376853 1176706 cri.go:89] found id: ""
	I1217 00:58:42.376867 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.376874 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:42.376880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:42.376940 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:42.402292 1176706 cri.go:89] found id: ""
	I1217 00:58:42.402307 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.402315 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:42.402320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:42.402381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:42.432293 1176706 cri.go:89] found id: ""
	I1217 00:58:42.432306 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.432314 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:42.432319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:42.432378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:42.459173 1176706 cri.go:89] found id: ""
	I1217 00:58:42.459188 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.459195 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:42.459200 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:42.459259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:42.485520 1176706 cri.go:89] found id: ""
	I1217 00:58:42.485534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.485541 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:42.485549 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:42.485562 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:42.553260 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:42.553281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.571244 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:42.571261 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:42.633598 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:42.633609 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:42.633622 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:42.706387 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:42.706408 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.237565 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:45.259348 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:45.259429 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:45.305574 1176706 cri.go:89] found id: ""
	I1217 00:58:45.305589 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.305597 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:45.305602 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:45.305664 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:45.349162 1176706 cri.go:89] found id: ""
	I1217 00:58:45.349177 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.349187 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:45.349192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:45.349256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:45.376828 1176706 cri.go:89] found id: ""
	I1217 00:58:45.376842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.376849 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:45.376855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:45.376915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:45.402470 1176706 cri.go:89] found id: ""
	I1217 00:58:45.402485 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.402492 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:45.402497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:45.402554 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:45.429756 1176706 cri.go:89] found id: ""
	I1217 00:58:45.429790 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.429820 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:45.429842 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:45.429980 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:45.459618 1176706 cri.go:89] found id: ""
	I1217 00:58:45.459632 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.459640 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:45.459647 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:45.459709 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:45.486504 1176706 cri.go:89] found id: ""
	I1217 00:58:45.486518 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.486526 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:45.486533 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:45.486549 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:45.505026 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:45.505044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:45.569592 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:45.569602 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:45.569612 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:45.642249 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:45.642270 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.673783 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:45.673799 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.241441 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:48.253986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:48.254052 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:48.298549 1176706 cri.go:89] found id: ""
	I1217 00:58:48.298562 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.298569 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:48.298575 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:48.298633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:48.329982 1176706 cri.go:89] found id: ""
	I1217 00:58:48.329997 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.330004 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:48.330010 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:48.330068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:48.356278 1176706 cri.go:89] found id: ""
	I1217 00:58:48.356291 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.356298 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:48.356304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:48.356363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:48.381931 1176706 cri.go:89] found id: ""
	I1217 00:58:48.381944 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.381952 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:48.381957 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:48.382012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:48.408076 1176706 cri.go:89] found id: ""
	I1217 00:58:48.408091 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.408098 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:48.408103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:48.408167 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:48.438515 1176706 cri.go:89] found id: ""
	I1217 00:58:48.438529 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.438536 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:48.438542 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:48.438615 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:48.464771 1176706 cri.go:89] found id: ""
	I1217 00:58:48.464784 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.464791 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:48.464800 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:48.464815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.531756 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:48.531777 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:48.550180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:48.550197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:48.614503 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:48.614514 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:48.614524 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:48.683497 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:48.683519 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:51.214024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:51.224516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:51.224581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:51.250104 1176706 cri.go:89] found id: ""
	I1217 00:58:51.250118 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.250125 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:51.250131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:51.250204 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:51.287241 1176706 cri.go:89] found id: ""
	I1217 00:58:51.287255 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.287263 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:51.287268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:51.287334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:51.326285 1176706 cri.go:89] found id: ""
	I1217 00:58:51.326299 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.326306 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:51.326312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:51.326375 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:51.353495 1176706 cri.go:89] found id: ""
	I1217 00:58:51.353509 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.353516 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:51.353521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:51.353577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:51.379404 1176706 cri.go:89] found id: ""
	I1217 00:58:51.379417 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.379425 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:51.379430 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:51.379489 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:51.405891 1176706 cri.go:89] found id: ""
	I1217 00:58:51.405905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.405912 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:51.405919 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:51.405979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:51.431497 1176706 cri.go:89] found id: ""
	I1217 00:58:51.431510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.431529 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:51.431537 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:51.431547 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:51.497786 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:51.497805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:51.516101 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:51.516120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:51.584128 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:51.584139 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:51.584150 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:51.652739 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:51.652760 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:54.182755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:54.194058 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:54.194127 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:54.219806 1176706 cri.go:89] found id: ""
	I1217 00:58:54.219821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.219828 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:54.219833 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:54.219894 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:54.245268 1176706 cri.go:89] found id: ""
	I1217 00:58:54.245281 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.245289 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:54.245294 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:54.245353 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:54.278676 1176706 cri.go:89] found id: ""
	I1217 00:58:54.278690 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.278697 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:54.278703 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:54.278766 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:54.305307 1176706 cri.go:89] found id: ""
	I1217 00:58:54.305321 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.305329 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:54.305334 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:54.305400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:54.330666 1176706 cri.go:89] found id: ""
	I1217 00:58:54.330680 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.330688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:54.330693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:54.330763 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:54.356855 1176706 cri.go:89] found id: ""
	I1217 00:58:54.356875 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.356886 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:54.356892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:54.356985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:54.390389 1176706 cri.go:89] found id: ""
	I1217 00:58:54.390404 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.390411 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:54.390419 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:54.390429 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:54.456633 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:54.456654 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:54.474716 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:54.474734 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:54.542032 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:54.542052 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:54.542063 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:54.614689 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:54.614710 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:57.146377 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:57.156881 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:57.156942 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:57.181785 1176706 cri.go:89] found id: ""
	I1217 00:58:57.181800 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.181808 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:57.181813 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:57.181869 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:57.208021 1176706 cri.go:89] found id: ""
	I1217 00:58:57.208046 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.208059 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:57.208065 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:57.208133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:57.235483 1176706 cri.go:89] found id: ""
	I1217 00:58:57.235497 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.235505 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:57.235510 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:57.235569 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:57.269950 1176706 cri.go:89] found id: ""
	I1217 00:58:57.269972 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.269980 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:57.269986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:57.270063 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:57.296896 1176706 cri.go:89] found id: ""
	I1217 00:58:57.296911 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.296918 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:57.296924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:57.296983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:57.325435 1176706 cri.go:89] found id: ""
	I1217 00:58:57.325452 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.325462 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:57.325468 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:57.325526 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:57.350942 1176706 cri.go:89] found id: ""
	I1217 00:58:57.350957 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.350965 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:57.350973 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:57.350982 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:57.416866 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:57.416886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:57.434717 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:57.434736 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:57.499393 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:57.499403 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:57.499414 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:57.567648 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:57.567668 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:00.097029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:00.143893 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:00.143993 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:00.260647 1176706 cri.go:89] found id: ""
	I1217 00:59:00.262402 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.262438 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:00.262449 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:00.262564 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:00.330718 1176706 cri.go:89] found id: ""
	I1217 00:59:00.330734 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.330745 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:00.330751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:00.330862 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:00.372603 1176706 cri.go:89] found id: ""
	I1217 00:59:00.372630 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.372638 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:00.372645 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:00.372721 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:00.403443 1176706 cri.go:89] found id: ""
	I1217 00:59:00.403469 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.403478 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:00.403484 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:00.403558 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:00.432235 1176706 cri.go:89] found id: ""
	I1217 00:59:00.432260 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.432268 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:00.432274 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:00.432341 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:00.464475 1176706 cri.go:89] found id: ""
	I1217 00:59:00.464489 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.464496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:00.464501 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:00.464563 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:00.494126 1176706 cri.go:89] found id: ""
	I1217 00:59:00.494156 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.494164 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:00.494172 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:00.494182 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:00.564811 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:00.564831 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:00.582720 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:00.582738 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:00.643909 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:00.643921 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:00.643931 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:00.716875 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:00.716895 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.245660 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:03.256968 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:03.257032 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:03.283953 1176706 cri.go:89] found id: ""
	I1217 00:59:03.283968 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.283976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:03.283981 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:03.284041 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:03.313015 1176706 cri.go:89] found id: ""
	I1217 00:59:03.313029 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.313036 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:03.313041 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:03.313098 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:03.341219 1176706 cri.go:89] found id: ""
	I1217 00:59:03.341233 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.341241 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:03.341246 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:03.341304 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:03.366417 1176706 cri.go:89] found id: ""
	I1217 00:59:03.366430 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.366437 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:03.366443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:03.366499 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:03.395548 1176706 cri.go:89] found id: ""
	I1217 00:59:03.395561 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.395568 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:03.395574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:03.395631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:03.425673 1176706 cri.go:89] found id: ""
	I1217 00:59:03.425687 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.425694 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:03.425699 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:03.425758 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:03.452761 1176706 cri.go:89] found id: ""
	I1217 00:59:03.452775 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.452782 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:03.452790 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:03.452813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:03.470985 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:03.471004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:03.539585 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:03.539606 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:03.539617 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:03.608766 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:03.608787 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.641472 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:03.641487 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.214627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:06.225029 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:06.225095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:06.262898 1176706 cri.go:89] found id: ""
	I1217 00:59:06.262912 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.262919 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:06.262924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:06.262979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:06.295811 1176706 cri.go:89] found id: ""
	I1217 00:59:06.295825 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.295832 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:06.295837 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:06.295900 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:06.325305 1176706 cri.go:89] found id: ""
	I1217 00:59:06.325319 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.325326 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:06.325331 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:06.325388 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:06.350976 1176706 cri.go:89] found id: ""
	I1217 00:59:06.350990 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.350997 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:06.351002 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:06.351061 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:06.381013 1176706 cri.go:89] found id: ""
	I1217 00:59:06.381027 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.381034 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:06.381040 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:06.381156 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:06.407543 1176706 cri.go:89] found id: ""
	I1217 00:59:06.407556 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.407564 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:06.407569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:06.407627 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:06.435419 1176706 cri.go:89] found id: ""
	I1217 00:59:06.435433 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.435440 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:06.435448 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:06.435460 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:06.472071 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:06.472098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.540915 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:06.540936 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:06.558800 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:06.558816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:06.626144 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:06.626156 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:06.626167 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.199032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:09.210273 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:09.210345 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:09.238454 1176706 cri.go:89] found id: ""
	I1217 00:59:09.238468 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.238475 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:09.238481 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:09.238539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:09.283355 1176706 cri.go:89] found id: ""
	I1217 00:59:09.283369 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.283377 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:09.283382 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:09.283452 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:09.322894 1176706 cri.go:89] found id: ""
	I1217 00:59:09.322909 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.322917 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:09.322924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:09.322983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:09.349261 1176706 cri.go:89] found id: ""
	I1217 00:59:09.349275 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.349282 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:09.349290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:09.349348 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:09.375365 1176706 cri.go:89] found id: ""
	I1217 00:59:09.375381 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.375390 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:09.375395 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:09.375458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:09.404751 1176706 cri.go:89] found id: ""
	I1217 00:59:09.404765 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.404773 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:09.404778 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:09.404840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:09.430184 1176706 cri.go:89] found id: ""
	I1217 00:59:09.430198 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.430206 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:09.430214 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:09.430224 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:09.496857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:09.496876 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:09.515406 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:09.515423 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:09.581087 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:09.581098 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:09.581109 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.650268 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:09.650288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.181362 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:12.192867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:12.192928 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:12.219737 1176706 cri.go:89] found id: ""
	I1217 00:59:12.219750 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.219757 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:12.219763 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:12.219821 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:12.245063 1176706 cri.go:89] found id: ""
	I1217 00:59:12.245084 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.245091 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:12.245097 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:12.245165 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:12.272131 1176706 cri.go:89] found id: ""
	I1217 00:59:12.272145 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.272152 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:12.272157 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:12.272216 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:12.303997 1176706 cri.go:89] found id: ""
	I1217 00:59:12.304011 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.304018 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:12.304024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:12.304085 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:12.333611 1176706 cri.go:89] found id: ""
	I1217 00:59:12.333624 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.333632 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:12.333637 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:12.333693 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:12.363773 1176706 cri.go:89] found id: ""
	I1217 00:59:12.363789 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.363797 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:12.363802 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:12.363863 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:12.389846 1176706 cri.go:89] found id: ""
	I1217 00:59:12.389861 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.389868 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:12.389875 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:12.389886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:12.407604 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:12.407621 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:12.473182 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:12.473192 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:12.473203 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:12.543348 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:12.543369 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.577767 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:12.577783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.146065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:15.160131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:15.160197 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:15.190612 1176706 cri.go:89] found id: ""
	I1217 00:59:15.190626 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.190634 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:15.190639 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:15.190699 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:15.218099 1176706 cri.go:89] found id: ""
	I1217 00:59:15.218113 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.218121 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:15.218126 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:15.218184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:15.248835 1176706 cri.go:89] found id: ""
	I1217 00:59:15.248848 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.248856 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:15.248861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:15.248918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:15.285228 1176706 cri.go:89] found id: ""
	I1217 00:59:15.285242 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.285250 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:15.285256 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:15.285342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:15.319666 1176706 cri.go:89] found id: ""
	I1217 00:59:15.319684 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.319692 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:15.319697 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:15.319762 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:15.349950 1176706 cri.go:89] found id: ""
	I1217 00:59:15.349964 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.349971 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:15.349985 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:15.350057 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:15.377523 1176706 cri.go:89] found id: ""
	I1217 00:59:15.377539 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.377546 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:15.377553 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:15.377563 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.444971 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:15.444997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:15.463350 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:15.463367 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:15.527808 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:15.527819 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:15.527829 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:15.596798 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:15.596819 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.130677 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:18.141262 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:18.141323 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:18.169116 1176706 cri.go:89] found id: ""
	I1217 00:59:18.169130 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.169138 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:18.169144 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:18.169213 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:18.196282 1176706 cri.go:89] found id: ""
	I1217 00:59:18.196296 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.196303 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:18.196308 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:18.196374 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:18.221983 1176706 cri.go:89] found id: ""
	I1217 00:59:18.222001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.222008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:18.222014 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:18.222104 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:18.253664 1176706 cri.go:89] found id: ""
	I1217 00:59:18.253678 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.253695 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:18.253701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:18.253759 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:18.285902 1176706 cri.go:89] found id: ""
	I1217 00:59:18.285926 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.285935 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:18.285940 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:18.286012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:18.314726 1176706 cri.go:89] found id: ""
	I1217 00:59:18.314740 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.314747 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:18.314762 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:18.314817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:18.344853 1176706 cri.go:89] found id: ""
	I1217 00:59:18.344867 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.344875 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:18.344882 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:18.344904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:18.414538 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:18.414559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.447095 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:18.447111 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:18.512991 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:18.513011 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:18.533994 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:18.534020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:18.598850 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.100519 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:21.110642 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:21.110704 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:21.135662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.135677 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.135684 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:21.135690 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:21.135749 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:21.165495 1176706 cri.go:89] found id: ""
	I1217 00:59:21.165508 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.165515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:21.165522 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:21.165581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:21.190194 1176706 cri.go:89] found id: ""
	I1217 00:59:21.190216 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.190224 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:21.190229 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:21.190286 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:21.215635 1176706 cri.go:89] found id: ""
	I1217 00:59:21.215658 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.215668 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:21.215674 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:21.215741 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:21.240901 1176706 cri.go:89] found id: ""
	I1217 00:59:21.240915 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.240922 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:21.240928 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:21.240985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:21.282662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.282676 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.282683 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:21.282689 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:21.282747 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:21.318909 1176706 cri.go:89] found id: ""
	I1217 00:59:21.318937 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.318946 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:21.318955 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:21.318981 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:21.389438 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:21.389459 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:21.407933 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:21.407951 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:21.470948 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.470958 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:21.470970 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:21.543202 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:21.543223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:24.074213 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:24.084903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:24.084967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:24.111192 1176706 cri.go:89] found id: ""
	I1217 00:59:24.111207 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.111214 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:24.111221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:24.111280 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:24.137550 1176706 cri.go:89] found id: ""
	I1217 00:59:24.137564 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.137572 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:24.137577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:24.137638 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:24.163576 1176706 cri.go:89] found id: ""
	I1217 00:59:24.163590 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.163598 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:24.163603 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:24.163661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:24.191365 1176706 cri.go:89] found id: ""
	I1217 00:59:24.191379 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.191386 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:24.191391 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:24.191451 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:24.218021 1176706 cri.go:89] found id: ""
	I1217 00:59:24.218036 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.218043 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:24.218048 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:24.218109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:24.243066 1176706 cri.go:89] found id: ""
	I1217 00:59:24.243079 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.243086 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:24.243092 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:24.243150 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:24.273424 1176706 cri.go:89] found id: ""
	I1217 00:59:24.273438 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.273446 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:24.273453 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:24.273468 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:24.352524 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:24.352545 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:24.370425 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:24.370445 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:24.435871 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:24.435881 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:24.435896 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:24.504929 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:24.504949 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.033266 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:27.043460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:27.043521 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:27.068665 1176706 cri.go:89] found id: ""
	I1217 00:59:27.068679 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.068686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:27.068698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:27.068754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:27.094007 1176706 cri.go:89] found id: ""
	I1217 00:59:27.094021 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.094028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:27.094033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:27.094092 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:27.118910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.118923 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.118931 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:27.118936 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:27.118994 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:27.147303 1176706 cri.go:89] found id: ""
	I1217 00:59:27.147317 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.147324 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:27.147330 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:27.147386 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:27.172343 1176706 cri.go:89] found id: ""
	I1217 00:59:27.172357 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.172365 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:27.172370 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:27.172458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:27.197910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.197924 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.197932 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:27.197938 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:27.198001 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:27.227576 1176706 cri.go:89] found id: ""
	I1217 00:59:27.227591 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.227598 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:27.227606 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:27.227618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:27.311005 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:27.311016 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:27.311026 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:27.382732 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:27.382752 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.415820 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:27.415836 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:27.482903 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:27.482926 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.004621 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:30.030664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:30.030745 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:30.081469 1176706 cri.go:89] found id: ""
	I1217 00:59:30.081485 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.081493 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:30.081499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:30.081566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:30.113916 1176706 cri.go:89] found id: ""
	I1217 00:59:30.113931 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.113939 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:30.113946 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:30.114011 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:30.145424 1176706 cri.go:89] found id: ""
	I1217 00:59:30.145439 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.145447 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:30.145453 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:30.145519 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:30.172979 1176706 cri.go:89] found id: ""
	I1217 00:59:30.172993 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.173000 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:30.173006 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:30.173068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:30.203666 1176706 cri.go:89] found id: ""
	I1217 00:59:30.203680 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.203688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:30.203693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:30.203754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:30.230252 1176706 cri.go:89] found id: ""
	I1217 00:59:30.230266 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.230274 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:30.230280 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:30.230346 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:30.263265 1176706 cri.go:89] found id: ""
	I1217 00:59:30.263288 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.263297 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:30.263305 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:30.263317 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.285817 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:30.285833 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:30.357587 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:30.357597 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:30.357609 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:30.426496 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:30.426518 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:30.455371 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:30.455387 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:33.025588 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:33.037063 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:33.037133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:33.066495 1176706 cri.go:89] found id: ""
	I1217 00:59:33.066510 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.066518 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:33.066531 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:33.066593 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:33.094203 1176706 cri.go:89] found id: ""
	I1217 00:59:33.094218 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.094225 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:33.094230 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:33.094289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:33.121048 1176706 cri.go:89] found id: ""
	I1217 00:59:33.121062 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.121070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:33.121076 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:33.121137 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:33.148530 1176706 cri.go:89] found id: ""
	I1217 00:59:33.148559 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.148568 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:33.148574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:33.148647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:33.175802 1176706 cri.go:89] found id: ""
	I1217 00:59:33.175816 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.175823 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:33.175829 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:33.175892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:33.206535 1176706 cri.go:89] found id: ""
	I1217 00:59:33.206548 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.206556 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:33.206562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:33.206623 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:33.236039 1176706 cri.go:89] found id: ""
	I1217 00:59:33.236052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.236060 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:33.236068 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:33.236078 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:33.255180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:33.255197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:33.339098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:33.339108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:33.339121 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:33.412971 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:33.412997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:33.441676 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:33.441694 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.008647 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:36.020237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:36.020301 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:36.052600 1176706 cri.go:89] found id: ""
	I1217 00:59:36.052616 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.052623 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:36.052629 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:36.052692 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:36.081744 1176706 cri.go:89] found id: ""
	I1217 00:59:36.081759 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.081768 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:36.081773 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:36.081841 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:36.109987 1176706 cri.go:89] found id: ""
	I1217 00:59:36.110001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.110008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:36.110013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:36.110077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:36.134954 1176706 cri.go:89] found id: ""
	I1217 00:59:36.134967 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.134975 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:36.134980 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:36.135037 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:36.159862 1176706 cri.go:89] found id: ""
	I1217 00:59:36.159876 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.159884 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:36.159889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:36.159947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:36.187809 1176706 cri.go:89] found id: ""
	I1217 00:59:36.187822 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.187829 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:36.187835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:36.187904 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:36.214242 1176706 cri.go:89] found id: ""
	I1217 00:59:36.214257 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.214264 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:36.214272 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:36.214283 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.286225 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:36.286244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:36.305628 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:36.305646 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:36.371158 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:36.371170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:36.371181 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:36.439045 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:36.439065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:38.969106 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:38.979363 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:38.979424 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:39.007795 1176706 cri.go:89] found id: ""
	I1217 00:59:39.007810 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.007818 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:39.007824 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:39.007888 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:39.034152 1176706 cri.go:89] found id: ""
	I1217 00:59:39.034166 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.034173 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:39.034179 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:39.034238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:39.059914 1176706 cri.go:89] found id: ""
	I1217 00:59:39.059928 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.059935 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:39.059941 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:39.060002 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:39.085320 1176706 cri.go:89] found id: ""
	I1217 00:59:39.085334 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.085341 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:39.085349 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:39.085405 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:39.110285 1176706 cri.go:89] found id: ""
	I1217 00:59:39.110298 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.110306 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:39.110311 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:39.110372 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:39.135035 1176706 cri.go:89] found id: ""
	I1217 00:59:39.135058 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.135066 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:39.135072 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:39.135139 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:39.159817 1176706 cri.go:89] found id: ""
	I1217 00:59:39.159830 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.159848 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:39.159857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:39.159872 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:39.177791 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:39.177809 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:39.249533 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:39.249543 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:39.249552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:39.325557 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:39.325577 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:39.360066 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:39.360085 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:41.928404 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:41.938632 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:41.938696 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:41.964031 1176706 cri.go:89] found id: ""
	I1217 00:59:41.964052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.964059 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:41.964064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:41.964122 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:41.993062 1176706 cri.go:89] found id: ""
	I1217 00:59:41.993076 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.993084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:41.993089 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:41.993160 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:42.033652 1176706 cri.go:89] found id: ""
	I1217 00:59:42.033667 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.033676 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:42.033681 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:42.033746 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:42.060629 1176706 cri.go:89] found id: ""
	I1217 00:59:42.060645 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.060653 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:42.060659 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:42.060722 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:42.092817 1176706 cri.go:89] found id: ""
	I1217 00:59:42.092845 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.092853 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:42.092868 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:42.092941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:42.136486 1176706 cri.go:89] found id: ""
	I1217 00:59:42.136506 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.136515 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:42.136521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:42.136592 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:42.171937 1176706 cri.go:89] found id: ""
	I1217 00:59:42.171952 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.171959 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:42.171967 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:42.171979 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:42.262695 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:42.262707 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:42.262718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:42.339199 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:42.339220 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:42.372997 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:42.373025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:42.446036 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:42.446055 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:44.965013 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:44.976094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:44.976161 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:45.015164 1176706 cri.go:89] found id: ""
	I1217 00:59:45.015181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.015189 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:45.015195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:45.015272 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:45.071613 1176706 cri.go:89] found id: ""
	I1217 00:59:45.071635 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.071643 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:45.071649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:45.071715 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:45.119793 1176706 cri.go:89] found id: ""
	I1217 00:59:45.119818 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.119826 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:45.119839 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:45.119914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:45.151783 1176706 cri.go:89] found id: ""
	I1217 00:59:45.151800 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.151808 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:45.151814 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:45.151892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:45.215691 1176706 cri.go:89] found id: ""
	I1217 00:59:45.215708 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.215717 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:45.215723 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:45.215788 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:45.307588 1176706 cri.go:89] found id: ""
	I1217 00:59:45.307603 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.307612 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:45.307617 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:45.307686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:45.338241 1176706 cri.go:89] found id: ""
	I1217 00:59:45.338255 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.338262 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:45.338270 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:45.338281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:45.369988 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:45.370005 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:45.441693 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:45.441715 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:45.461548 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:45.461567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:45.548353 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:45.548363 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:45.548374 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.120029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:48.130460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:48.130527 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:48.158049 1176706 cri.go:89] found id: ""
	I1217 00:59:48.158063 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.158070 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:48.158075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:48.158133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:48.183768 1176706 cri.go:89] found id: ""
	I1217 00:59:48.183782 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.183790 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:48.183795 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:48.183853 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:48.209858 1176706 cri.go:89] found id: ""
	I1217 00:59:48.209883 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.209891 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:48.209897 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:48.209969 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:48.239433 1176706 cri.go:89] found id: ""
	I1217 00:59:48.239447 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.239464 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:48.239470 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:48.239546 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:48.283289 1176706 cri.go:89] found id: ""
	I1217 00:59:48.283312 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.283320 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:48.283325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:48.283401 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:48.314402 1176706 cri.go:89] found id: ""
	I1217 00:59:48.314429 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.314437 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:48.314443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:48.314511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:48.343692 1176706 cri.go:89] found id: ""
	I1217 00:59:48.343706 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.343727 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:48.343735 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:48.343745 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:48.362542 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:48.362560 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:48.427994 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:48.428004 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:48.428016 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.499539 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:48.499559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:48.531009 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:48.531025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.098220 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:51.109265 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:51.109331 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:51.136197 1176706 cri.go:89] found id: ""
	I1217 00:59:51.136213 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.136221 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:51.136227 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:51.136287 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:51.163078 1176706 cri.go:89] found id: ""
	I1217 00:59:51.163092 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.163100 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:51.163105 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:51.163172 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:51.191839 1176706 cri.go:89] found id: ""
	I1217 00:59:51.191853 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.191861 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:51.191866 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:51.191949 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:51.218098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.218116 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.218124 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:51.218130 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:51.218211 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:51.243098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.243112 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.243120 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:51.243125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:51.243191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:51.271566 1176706 cri.go:89] found id: ""
	I1217 00:59:51.271579 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.271586 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:51.271591 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:51.271647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:51.305158 1176706 cri.go:89] found id: ""
	I1217 00:59:51.305181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.305187 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:51.305196 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:51.305207 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.376352 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:51.376373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:51.394410 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:51.394427 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:51.459231 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:51.459240 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:51.459251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:51.528231 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:51.528252 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:54.058312 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:54.069031 1176706 kubeadm.go:602] duration metric: took 4m2.785263609s to restartPrimaryControlPlane
	W1217 00:59:54.069095 1176706 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:59:54.069181 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:59:54.486154 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:59:54.499356 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:59:54.507725 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:59:54.507779 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:59:54.515997 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:59:54.516007 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 00:59:54.516064 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:59:54.524157 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:59:54.524213 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:59:54.532265 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:59:54.540638 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:59:54.540707 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:59:54.548269 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.556326 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:59:54.556388 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.564545 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:59:54.572682 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:59:54.572738 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:59:54.580611 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:59:54.700281 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:59:54.700747 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:59:54.763643 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:03:56.152758 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:03:56.152795 1176706 kubeadm.go:319] 
	I1217 01:03:56.152869 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:03:56.156728 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.156797 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.156958 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.157014 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.157073 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.157118 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.157197 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.157253 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.157300 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.157352 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.157400 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.157453 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.157508 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.157553 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.157624 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.157727 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.157824 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.157884 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.160971 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.161055 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.161118 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.161193 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.161252 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.161327 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.161379 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.161441 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.161501 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.161574 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.161645 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.161681 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.161741 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:56.161790 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:56.161845 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:56.161896 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:56.161957 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:56.162010 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:56.162092 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:56.162157 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:56.165021 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:56.165147 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:56.165231 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:56.165300 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:56.165418 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:56.165512 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:56.165614 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:56.165696 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:56.165733 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:56.165861 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:56.165963 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:03:56.166026 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240228s
	I1217 01:03:56.166028 1176706 kubeadm.go:319] 
	I1217 01:03:56.166083 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:03:56.166114 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:03:56.166215 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:03:56.166218 1176706 kubeadm.go:319] 
	I1217 01:03:56.166320 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:03:56.166351 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:03:56.166380 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 01:03:56.166487 1176706 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240228s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:03:56.166580 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 01:03:56.166903 1176706 kubeadm.go:319] 
	I1217 01:03:56.586040 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:03:56.599481 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:03:56.599536 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:03:56.607687 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:03:56.607697 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 01:03:56.607750 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:03:56.615588 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:03:56.615644 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:03:56.623820 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:03:56.631817 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:03:56.631875 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:03:56.639771 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.647723 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:03:56.647784 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.655274 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:03:56.662953 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:03:56.663009 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:03:56.671031 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:03:56.709331 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.709382 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.784528 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.784593 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.784627 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.784671 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.784718 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.784764 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.784811 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.784857 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.784907 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.784950 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.784997 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.785046 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.852730 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.852846 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.852941 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.864882 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.870169 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.870260 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.870331 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.870414 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.870480 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.870560 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.870623 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.870698 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.870772 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.870857 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.870939 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.870985 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.871053 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:57.081118 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:57.308024 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:57.795688 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:58.747783 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:59.056308 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:59.056908 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:59.061460 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:59.064667 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:59.064766 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:59.064843 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:59.064909 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:59.079437 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:59.079539 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:59.087425 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:59.087990 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:59.088228 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:59.232706 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:59.232823 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:07:59.232882 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288911s
	I1217 01:07:59.232905 1176706 kubeadm.go:319] 
	I1217 01:07:59.232961 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:07:59.232994 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:07:59.233119 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:07:59.233124 1176706 kubeadm.go:319] 
	I1217 01:07:59.233227 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:07:59.233261 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:07:59.233291 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:07:59.233294 1176706 kubeadm.go:319] 
	I1217 01:07:59.237945 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:07:59.238359 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:07:59.238466 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:07:59.238699 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:07:59.238704 1176706 kubeadm.go:319] 
	I1217 01:07:59.238771 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:07:59.238833 1176706 kubeadm.go:403] duration metric: took 12m7.995613678s to StartCluster
	I1217 01:07:59.238862 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:07:59.238924 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:07:59.265092 1176706 cri.go:89] found id: ""
	I1217 01:07:59.265110 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.265118 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:07:59.265124 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:07:59.265190 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:07:59.289869 1176706 cri.go:89] found id: ""
	I1217 01:07:59.289884 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.289891 1176706 logs.go:284] No container was found matching "etcd"
	I1217 01:07:59.289896 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:07:59.289954 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:07:59.315177 1176706 cri.go:89] found id: ""
	I1217 01:07:59.315192 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.315200 1176706 logs.go:284] No container was found matching "coredns"
	I1217 01:07:59.315206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:07:59.315267 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:07:59.343402 1176706 cri.go:89] found id: ""
	I1217 01:07:59.343422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.343429 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:07:59.343435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:07:59.343492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:07:59.369351 1176706 cri.go:89] found id: ""
	I1217 01:07:59.369367 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.369375 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:07:59.369381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:07:59.369446 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:07:59.395407 1176706 cri.go:89] found id: ""
	I1217 01:07:59.395422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.395430 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:07:59.395436 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:07:59.395497 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:07:59.425527 1176706 cri.go:89] found id: ""
	I1217 01:07:59.425542 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.425549 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 01:07:59.425557 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:07:59.425567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:07:59.496396 1176706 logs.go:123] Gathering logs for container status ...
	I1217 01:07:59.496422 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:07:59.529365 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 01:07:59.529381 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:07:59.607059 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 01:07:59.607079 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:07:59.625460 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:07:59.625476 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:07:59.694111 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 01:07:59.694128 1176706 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:07:59.694160 1176706 out.go:285] * 
	W1217 01:07:59.696578 1176706 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.696718 1176706 out.go:285] * 
	W1217 01:07:59.699147 1176706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:07:59.705064 1176706 out.go:203] 
	W1217 01:07:59.708024 1176706 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.708074 1176706 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:07:59.708093 1176706 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:07:59.711386 1176706 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.077706792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.077955098Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078065315Z" level=info msg="Create NRI interface"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078221668Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.07823903Z" level=info msg="runtime interface created"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078253274Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078274861Z" level=info msg="runtime interface starting up..."
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078281187Z" level=info msg="starting plugins..."
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078309913Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078394662Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:55:50 functional-389537 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.769515298Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7738c2ec-23fd-41c2-bf87-2793f023edcc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.770662432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=dfe8a792-5dcc-4fb8-9e7c-61d12e13480c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.771173887Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5e41de14-6ab5-4bd0-8b1f-d1aaa926d052 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.771613762Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ef9a5b6b-ddfd-4451-b4b5-65c9f96efdc9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.77201523Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f7106076-8cd9-43cb-b7d6-b0df492103a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.772558315Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a5725d73-e041-4ebd-99d9-bf135606222b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.772975922Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1c1829d4-725f-476c-b5d6-fe07b75b9254 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:08:01.163100   21358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:01.163899   21358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:01.165642   21358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:01.166224   21358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:01.167864   21358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:08:01 up  6:50,  0 user,  load average: 0.53, 0.28, 0.47
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:07:58 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:58 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2119.
	Dec 17 01:07:58 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:58 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:58 functional-389537 kubelet[21163]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:07:58 functional-389537 kubelet[21163]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:07:58 functional-389537 kubelet[21163]: E1217 01:07:58.800572   21163 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:58 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:58 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:07:59 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2120.
	Dec 17 01:07:59 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:59 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:07:59 functional-389537 kubelet[21230]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:07:59 functional-389537 kubelet[21230]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:07:59 functional-389537 kubelet[21230]: E1217 01:07:59.565176   21230 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:07:59 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:07:59 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:00 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2121.
	Dec 17 01:08:00 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:00 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:00 functional-389537 kubelet[21273]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:00 functional-389537 kubelet[21273]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:00 functional-389537 kubelet[21273]: E1217 01:08:00.409492   21273 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:00 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:00 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
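The kubelet journal above shows the service crash-looping on "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the KubeletConfiguration option 'FailCgroupV1'. A minimal sketch of how the node's cgroup mode can be confirmed and what the opt-in the warning refers to would look like, assuming the profile name functional-389537 and the /var/lib/kubelet/config.yaml path written by the kubeadm run above; illustrative only, not a change the test harness makes:

	# "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1 on the node.
	minikube -p functional-389537 ssh -- stat -fc %T /sys/fs/cgroup/

	# Tail the restart loop shown in the kubelet section (same journal the kubeadm output suggests).
	minikube -p functional-389537 ssh -- sudo journalctl -u kubelet -n 20 --no-pager

	# The preflight warning refers to the KubeletConfiguration field 'failCgroupV1'; staying on
	# cgroup v1 with kubelet v1.35+ would mean carrying a line like the following in
	# /var/lib/kubelet/config.yaml (sketch only):
	#   failCgroupV1: false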
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (443.264489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.35s)
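The suggestion in the log above points at minikube's --extra-config mechanism (component.key=value). Restated as a start invocation with the profile, driver, runtime and Kubernetes version taken from this run, purely as a sketch of the suggested command; whether a cgroup-driver change addresses the cgroup v1 validation failure is a separate question:

	minikube start -p functional-389537 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd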

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-389537 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-389537 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.335257ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-389537 get po -l tier=control-plane -n kube-system -o=json": exit status 1
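The ComponentHealth check reduces to the kubectl query above: list the kube-system pods labelled tier=control-plane and inspect their status, which here returns an empty list because the apiserver is unreachable. Once the apiserver answers, the same check can be reproduced by hand; a sketch using a jsonpath template (the output format is an assumption, the context and selector are the ones the test uses):

	kubectl --context functional-389537 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'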
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
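The inspect output shows the container Running with the apiserver port 8441/tcp published on 127.0.0.1:33911, which is consistent with the refused connections coming from the apiserver itself being down rather than the container or its network being gone. Two equivalent ways to read just that mapping, sketched against the same container name:

	docker port functional-389537 8441
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389537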
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (295.119385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
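The --format flag used in these status calls is a Go template over minikube's status output, which is why a single field such as Host or APIServer can be selected on its own. A sketch combining the fields that appear in this post-mortem (the Kubelet field name is assumed from the default status output):

	minikube status -p functional-389537 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'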
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-099267 ssh pgrep buildkitd                                                                                                             │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │                     │
	│ image   │ functional-099267 image ls --format yaml --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr                                            │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format json --alsologtostderr                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls --format table --alsologtostderr                                                                                       │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ image   │ functional-099267 image ls                                                                                                                        │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:40 UTC │
	│ delete  │ -p functional-099267                                                                                                                              │ functional-099267 │ jenkins │ v1.37.0 │ 17 Dec 25 00:40 UTC │ 17 Dec 25 00:41 UTC │
	│ start   │ -p functional-389537 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	│ start   │ -p functional-389537 --alsologtostderr -v=8                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:49 UTC │                     │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add registry.k8s.io/pause:latest                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache add minikube-local-cache-test:functional-389537                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ functional-389537 cache delete minikube-local-cache-test:functional-389537                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl images                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cache   │ functional-389537 cache reload                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh     │ functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ kubectl │ functional-389537 kubectl -- --context functional-389537 get pods                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ start   │ -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:55:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:55:46.994785 1176706 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:55:46.994905 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.994909 1176706 out.go:374] Setting ErrFile to fd 2...
	I1217 00:55:46.994912 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.995145 1176706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:55:46.995485 1176706 out.go:368] Setting JSON to false
	I1217 00:55:46.996300 1176706 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23897,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:55:46.996353 1176706 start.go:143] virtualization:  
	I1217 00:55:46.999868 1176706 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:55:47.003126 1176706 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:55:47.003469 1176706 notify.go:221] Checking for updates...
	I1217 00:55:47.009985 1176706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:55:47.012797 1176706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:55:47.015597 1176706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:55:47.018366 1176706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:55:47.021294 1176706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:55:47.024608 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:47.024710 1176706 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:55:47.058976 1176706 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:55:47.059096 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.117622 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.107831529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.117708 1176706 docker.go:319] overlay module found
	I1217 00:55:47.120741 1176706 out.go:179] * Using the docker driver based on existing profile
	I1217 00:55:47.123563 1176706 start.go:309] selected driver: docker
	I1217 00:55:47.123570 1176706 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.123673 1176706 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:55:47.123773 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.174997 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.166206706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.175382 1176706 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:55:47.175411 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:47.175464 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:47.175503 1176706 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.182544 1176706 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:55:47.185443 1176706 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:55:47.188263 1176706 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:55:47.191087 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:47.191140 1176706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:55:47.191147 1176706 cache.go:65] Caching tarball of preloaded images
	I1217 00:55:47.191162 1176706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:55:47.191229 1176706 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:55:47.191238 1176706 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:55:47.191343 1176706 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:55:47.210444 1176706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:55:47.210456 1176706 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:55:47.210476 1176706 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:55:47.210509 1176706 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:55:47.210571 1176706 start.go:364] duration metric: took 45.496µs to acquireMachinesLock for "functional-389537"
	I1217 00:55:47.210589 1176706 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:55:47.210598 1176706 fix.go:54] fixHost starting: 
	I1217 00:55:47.210865 1176706 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:55:47.227344 1176706 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:55:47.227372 1176706 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:55:47.230529 1176706 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:55:47.230551 1176706 machine.go:94] provisionDockerMachine start ...
	I1217 00:55:47.230646 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.247199 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.247509 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.247515 1176706 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:55:47.376058 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.376078 1176706 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:55:47.376140 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.394017 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.394338 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.394346 1176706 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:55:47.541042 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.541113 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.567770 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.568067 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.568081 1176706 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:55:47.696783 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:55:47.696798 1176706 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:55:47.696826 1176706 ubuntu.go:190] setting up certificates
	I1217 00:55:47.696844 1176706 provision.go:84] configureAuth start
	I1217 00:55:47.696911 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:47.715433 1176706 provision.go:143] copyHostCerts
	I1217 00:55:47.715503 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:55:47.715510 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:55:47.715589 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:55:47.715698 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:55:47.715703 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:55:47.715729 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:55:47.715793 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:55:47.715796 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:55:47.715819 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:55:47.715916 1176706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:55:47.936144 1176706 provision.go:177] copyRemoteCerts
	I1217 00:55:47.936198 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:55:47.936245 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.956022 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.053167 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:55:48.072266 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:55:48.091659 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:55:48.111240 1176706 provision.go:87] duration metric: took 414.372164ms to configureAuth
	I1217 00:55:48.111259 1176706 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:55:48.111463 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:48.111573 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.130165 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:48.130471 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:48.130482 1176706 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:55:48.471522 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:55:48.471533 1176706 machine.go:97] duration metric: took 1.240975938s to provisionDockerMachine
	I1217 00:55:48.471544 1176706 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:55:48.471555 1176706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:55:48.471613 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:55:48.471661 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.490121 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.584735 1176706 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:55:48.588097 1176706 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:55:48.588115 1176706 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:55:48.588125 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:55:48.588181 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:55:48.588263 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:55:48.588334 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:55:48.588376 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:55:48.596032 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:48.613682 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:55:48.631217 1176706 start.go:296] duration metric: took 159.660022ms for postStartSetup
	I1217 00:55:48.631287 1176706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:55:48.631323 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.648559 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.741603 1176706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:55:48.746366 1176706 fix.go:56] duration metric: took 1.535755013s for fixHost
	I1217 00:55:48.746384 1176706 start.go:83] releasing machines lock for "functional-389537", held for 1.535804694s
	I1217 00:55:48.746455 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:48.763224 1176706 ssh_runner.go:195] Run: cat /version.json
	I1217 00:55:48.763430 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.763750 1176706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:55:48.763808 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.786426 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.786940 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.880624 1176706 ssh_runner.go:195] Run: systemctl --version
	I1217 00:55:48.974663 1176706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:55:49.027409 1176706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:55:49.032432 1176706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:55:49.032491 1176706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:55:49.041183 1176706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:55:49.041196 1176706 start.go:496] detecting cgroup driver to use...
	I1217 00:55:49.041228 1176706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:55:49.041278 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:55:49.058264 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:55:49.077295 1176706 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:55:49.077360 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:55:49.093971 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:55:49.107900 1176706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:55:49.227935 1176706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:55:49.348723 1176706 docker.go:234] disabling docker service ...
	I1217 00:55:49.348791 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:55:49.364370 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:55:49.377769 1176706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:55:49.508111 1176706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:55:49.633558 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:55:49.646587 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:55:49.660861 1176706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:55:49.660916 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.670006 1176706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:55:49.670064 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.678812 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.687975 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.697006 1176706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:55:49.705500 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.714719 1176706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.723320 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.732206 1176706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:55:49.740020 1176706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:55:49.747555 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:49.895105 1176706 ssh_runner.go:195] Run: sudo systemctl restart crio
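The block of sed/tee commands above is minikube reconfiguring cri-o in place: it points crictl at /var/run/crio/crio.sock, pins the pause image, switches the cgroup manager to cgroupfs, and opens unprivileged low ports via default_sysctls before restarting the runtime. A minimal way to confirm the resulting drop-in on the node (illustrative sketch, assuming the default file layout; not output from this log):

    # Show the keys minikube just enforced in the cri-o drop-in config.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, based on the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (entry inside the default_sysctls list)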
	I1217 00:55:50.085156 1176706 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:55:50.085220 1176706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:55:50.089378 1176706 start.go:564] Will wait 60s for crictl version
	I1217 00:55:50.089440 1176706 ssh_runner.go:195] Run: which crictl
	I1217 00:55:50.093400 1176706 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:55:50.123005 1176706 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:55:50.123090 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.155928 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.190668 1176706 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:55:50.193712 1176706 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:55:50.210245 1176706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:55:50.217339 1176706 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:55:50.220306 1176706 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:55:50.220479 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:50.220549 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.261117 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.261129 1176706 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:55:50.261188 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.288200 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.288211 1176706 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:55:50.288217 1176706 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:55:50.288323 1176706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:55:50.288468 1176706 ssh_runner.go:195] Run: crio config
	I1217 00:55:50.348160 1176706 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:55:50.348190 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:50.348199 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:50.348212 1176706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:55:50.348234 1176706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:55:50.348361 1176706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
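The generated kubeadm config above is what carries the test's enable-admission-plugins=NamespaceAutoProvision override into the apiserver's extraArgs; kubeadm renders those extraArgs as command-line flags in the static pod manifest under the staticPodPath declared in the same config. A quick check that the override reached the control plane (illustrative sketch, not output from this log):

    # kubeadm regenerates this manifest from the config above during "init phase control-plane".
    sudo grep -- '--enable-admission-plugins' /etc/kubernetes/manifests/kube-apiserver.yaml
    # Expected to include: --enable-admission-plugins=NamespaceAutoProvision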
	I1217 00:55:50.348453 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:55:50.356478 1176706 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:55:50.356555 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:55:50.364296 1176706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:55:50.378459 1176706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:55:50.391769 1176706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1217 00:55:50.404843 1176706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:55:50.408803 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:50.530281 1176706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:55:50.553453 1176706 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:55:50.553463 1176706 certs.go:195] generating shared ca certs ...
	I1217 00:55:50.553477 1176706 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:55:50.553609 1176706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:55:50.553660 1176706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:55:50.553666 1176706 certs.go:257] generating profile certs ...
	I1217 00:55:50.553779 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:55:50.553831 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:55:50.553877 1176706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:55:50.553979 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:55:50.554006 1176706 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:55:50.554013 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:55:50.554039 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:55:50.554060 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:55:50.554085 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:55:50.554129 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:50.555361 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:55:50.582492 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:55:50.603683 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:55:50.621384 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:55:50.639056 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:55:50.656396 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:55:50.673796 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:55:50.690805 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:55:50.708128 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:55:50.726044 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:55:50.743273 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:55:50.763262 1176706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:55:50.777113 1176706 ssh_runner.go:195] Run: openssl version
	I1217 00:55:50.783340 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.791319 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:55:50.799039 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802914 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802970 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.844145 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:55:50.851746 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.859382 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:55:50.866837 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870628 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870686 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.912088 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:55:50.919506 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.926804 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:55:50.934239 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938447 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938514 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.979317 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:55:50.986668 1176706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:55:50.990400 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:55:51.033890 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:55:51.074982 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:55:51.116748 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:55:51.160579 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:55:51.202188 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
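The six openssl runs above are certificate freshness checks: x509 -checkend 86400 exits 0 when the certificate is still valid 24 hours from now and non-zero when it is not, which is presumably what decides whether a certificate gets regenerated during the restart. A standalone equivalent for one of the files (illustrative sketch):

    # Exit status of -checkend drives the decision; 86400 seconds = 24 hours.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or is already invalid)"
    fi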
	I1217 00:55:51.243239 1176706 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:51.243328 1176706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:55:51.243394 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.274971 1176706 cri.go:89] found id: ""
	I1217 00:55:51.275034 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:55:51.283750 1176706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:55:51.283758 1176706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:55:51.283810 1176706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:55:51.291948 1176706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.292487 1176706 kubeconfig.go:125] found "functional-389537" server: "https://192.168.49.2:8441"
	I1217 00:55:51.293778 1176706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:55:51.304922 1176706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:41:14.220606710 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:55:50.397867980 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1217 00:55:51.304944 1176706 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:55:51.304956 1176706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 00:55:51.305024 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.335594 1176706 cri.go:89] found id: ""
	I1217 00:55:51.335654 1176706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:55:51.349252 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:55:51.357284 1176706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:45 /etc/kubernetes/scheduler.conf
	
	I1217 00:55:51.357346 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:55:51.365155 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:55:51.373122 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.373177 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:55:51.380532 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.387880 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.387941 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.395488 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:55:51.402971 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.403027 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:55:51.410207 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:55:51.417914 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:51.465120 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.243254 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.461995 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.527345 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.573822 1176706 api_server.go:52] waiting for apiserver process to appear ...
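The run of pgrep invocations below is minikube polling, roughly every 500 ms, for a kube-apiserver process started for this profile after the control-plane manifests were rewritten. A standalone equivalent of the poll (illustrative sketch; the real code also enforces a timeout that is not visible in this log):

    # Same pattern the loop below uses; blocks until the apiserver process exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
    echo "kube-apiserver process is up"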
	I1217 00:55:52.573908 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.074814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.574907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.075012 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.575023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.574684 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.074609 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.574663 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.074765 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.574635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.074907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.574627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.074088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.574795 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.097233 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.574961 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.074054 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.574065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.075050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.574031 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.075006 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.574216 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.074748 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.573974 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.074753 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.574034 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.075017 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.574061 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.074905 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.574698 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.074763 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.574614 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.074085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.574076 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.074847 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.574675 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.074172 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.574715 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.074369 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.574662 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.074071 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.575002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.074917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.574153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.074723 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.574433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.074632 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.574760 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.074421 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.574365 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.074110 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.574084 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.074083 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.574229 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.075007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.574915 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.074637 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.574418 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.074231 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.574859 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.074383 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.574046 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.074153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.574749 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.074247 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.574077 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.074002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.574149 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.074309 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.574050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.074975 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.574187 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.074918 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.574916 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.074771 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.574779 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.074798 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.573985 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.074834 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.574776 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.074670 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.574866 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.074740 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.574090 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.074115 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.574007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.074661 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.574687 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.074553 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.574236 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.074239 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.574036 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.074932 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.574096 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.074026 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.574255 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.074880 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.574038 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.073993 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.574088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.574323 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.074338 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.574154 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.074792 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.574063 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.074852 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.574810 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.074586 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.574043 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.075023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.574226 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.074137 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.585259 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.074119 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.573988 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.074068 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.575029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:52.074819 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:52.574056 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:52.574153 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:52.600363 1176706 cri.go:89] found id: ""
	I1217 00:56:52.600377 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.600384 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:52.600390 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:52.600466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:52.625666 1176706 cri.go:89] found id: ""
	I1217 00:56:52.625679 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.625686 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:52.625692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:52.625750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:52.651207 1176706 cri.go:89] found id: ""
	I1217 00:56:52.651220 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.651228 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:52.651233 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:52.651289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:52.675877 1176706 cri.go:89] found id: ""
	I1217 00:56:52.675891 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.675898 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:52.675904 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:52.675968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:52.705638 1176706 cri.go:89] found id: ""
	I1217 00:56:52.705651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.705658 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:52.705663 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:52.705733 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:52.734795 1176706 cri.go:89] found id: ""
	I1217 00:56:52.734809 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.734816 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:52.734821 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:52.734882 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:52.765098 1176706 cri.go:89] found id: ""
	I1217 00:56:52.765112 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.765119 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:52.765127 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:52.765138 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:52.797741 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:52.797759 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:52.872988 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:52.873007 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:52.891536 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:52.891552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:52.956983 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:52.956994 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:52.957004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.530194 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:55.540066 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:55.540129 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:55.566494 1176706 cri.go:89] found id: ""
	I1217 00:56:55.566509 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.566516 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:55.566521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:55.566579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:55.599453 1176706 cri.go:89] found id: ""
	I1217 00:56:55.599467 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.599474 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:55.599479 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:55.599539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:55.624628 1176706 cri.go:89] found id: ""
	I1217 00:56:55.624651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.624659 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:55.624664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:55.624720 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:55.650853 1176706 cri.go:89] found id: ""
	I1217 00:56:55.650867 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.650874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:55.650879 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:55.650947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:55.676274 1176706 cri.go:89] found id: ""
	I1217 00:56:55.676287 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.676295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:55.676302 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:55.676363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:55.705470 1176706 cri.go:89] found id: ""
	I1217 00:56:55.705484 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.705491 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:55.705497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:55.705577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:55.729482 1176706 cri.go:89] found id: ""
	I1217 00:56:55.729495 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.729502 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:55.729510 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:55.729520 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:55.797202 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:55.797223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:55.816424 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:55.816452 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:55.887945 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:55.887971 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:55.887984 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.962011 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:55.962032 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:58.492176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:58.503876 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:58.503952 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:58.530086 1176706 cri.go:89] found id: ""
	I1217 00:56:58.530101 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.530108 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:58.530114 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:58.530175 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:58.556063 1176706 cri.go:89] found id: ""
	I1217 00:56:58.556077 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.556084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:58.556090 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:58.556148 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:58.582188 1176706 cri.go:89] found id: ""
	I1217 00:56:58.582202 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.582209 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:58.582215 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:58.582295 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:58.607569 1176706 cri.go:89] found id: ""
	I1217 00:56:58.607583 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.607590 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:58.607595 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:58.607652 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:58.634350 1176706 cri.go:89] found id: ""
	I1217 00:56:58.634364 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.634371 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:58.634378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:58.634445 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:58.664026 1176706 cri.go:89] found id: ""
	I1217 00:56:58.664040 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.664048 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:58.664053 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:58.664114 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:58.689017 1176706 cri.go:89] found id: ""
	I1217 00:56:58.689030 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.689037 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:58.689050 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:58.689060 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:58.754795 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:58.754815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:58.775189 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:58.775206 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:58.849221 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:58.849231 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:58.849243 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:58.922086 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:58.922107 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.451030 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:01.460964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:01.461034 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:01.489661 1176706 cri.go:89] found id: ""
	I1217 00:57:01.489685 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.489693 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:01.489698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:01.489767 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:01.515445 1176706 cri.go:89] found id: ""
	I1217 00:57:01.515468 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.515476 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:01.515482 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:01.515549 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:01.540532 1176706 cri.go:89] found id: ""
	I1217 00:57:01.540546 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.540554 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:01.540560 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:01.540629 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:01.569650 1176706 cri.go:89] found id: ""
	I1217 00:57:01.569664 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.569671 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:01.569676 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:01.569738 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:01.596059 1176706 cri.go:89] found id: ""
	I1217 00:57:01.596072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.596080 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:01.596085 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:01.596140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:01.621197 1176706 cri.go:89] found id: ""
	I1217 00:57:01.621211 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.621218 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:01.621224 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:01.621282 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:01.650001 1176706 cri.go:89] found id: ""
	I1217 00:57:01.650014 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.650022 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:01.650029 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:01.650040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:01.667789 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:01.667805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:01.730637 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:01.730688 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:01.730705 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:01.804764 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:01.804783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.853135 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:01.853152 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.422102 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:04.432445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:04.432511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:04.456733 1176706 cri.go:89] found id: ""
	I1217 00:57:04.456747 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.456754 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:04.456760 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:04.456817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:04.481576 1176706 cri.go:89] found id: ""
	I1217 00:57:04.481591 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.481599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:04.481604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:04.481663 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:04.511390 1176706 cri.go:89] found id: ""
	I1217 00:57:04.511405 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.511412 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:04.511417 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:04.511481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:04.539584 1176706 cri.go:89] found id: ""
	I1217 00:57:04.539608 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.539615 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:04.539621 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:04.539686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:04.564039 1176706 cri.go:89] found id: ""
	I1217 00:57:04.564054 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.564061 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:04.564067 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:04.564126 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:04.588270 1176706 cri.go:89] found id: ""
	I1217 00:57:04.588283 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.588291 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:04.588296 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:04.588352 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:04.615420 1176706 cri.go:89] found id: ""
	I1217 00:57:04.615435 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.615442 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:04.615450 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:04.615461 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:04.648626 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:04.648647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.714893 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:04.714913 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:04.733517 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:04.733535 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:04.824195 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:04.824206 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:04.824217 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.400917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:07.410917 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:07.410975 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:07.437282 1176706 cri.go:89] found id: ""
	I1217 00:57:07.437303 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.437315 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:07.437325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:07.437414 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:07.466491 1176706 cri.go:89] found id: ""
	I1217 00:57:07.466506 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.466513 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:07.466518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:07.466585 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:07.491017 1176706 cri.go:89] found id: ""
	I1217 00:57:07.491030 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.491037 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:07.491042 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:07.491100 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:07.516269 1176706 cri.go:89] found id: ""
	I1217 00:57:07.516288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.516295 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:07.516301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:07.516370 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:07.541854 1176706 cri.go:89] found id: ""
	I1217 00:57:07.541867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.541874 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:07.541880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:07.541948 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:07.571479 1176706 cri.go:89] found id: ""
	I1217 00:57:07.571493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.571509 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:07.571516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:07.571576 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:07.597046 1176706 cri.go:89] found id: ""
	I1217 00:57:07.597072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.597079 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:07.597087 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:07.597097 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:07.672318 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:07.672336 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:07.672349 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.747576 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:07.747595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:07.779509 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:07.779525 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:07.855959 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:07.855980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.376085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:10.386576 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:10.386639 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:10.413996 1176706 cri.go:89] found id: ""
	I1217 00:57:10.414010 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.414017 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:10.414022 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:10.414082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:10.440046 1176706 cri.go:89] found id: ""
	I1217 00:57:10.440060 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.440067 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:10.440073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:10.440131 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:10.465533 1176706 cri.go:89] found id: ""
	I1217 00:57:10.465547 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.465563 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:10.465569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:10.465631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:10.491563 1176706 cri.go:89] found id: ""
	I1217 00:57:10.491577 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.491585 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:10.491590 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:10.491653 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:10.519680 1176706 cri.go:89] found id: ""
	I1217 00:57:10.519694 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.519710 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:10.519717 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:10.519778 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:10.556939 1176706 cri.go:89] found id: ""
	I1217 00:57:10.556956 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.556963 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:10.556969 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:10.557025 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:10.582061 1176706 cri.go:89] found id: ""
	I1217 00:57:10.582075 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.582082 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:10.582091 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:10.582102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:10.651854 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:10.651875 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.671002 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:10.671020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:10.744191 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:10.744201 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:10.744213 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:10.823224 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:10.823244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:13.353067 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:13.363299 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:13.363363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:13.388077 1176706 cri.go:89] found id: ""
	I1217 00:57:13.388090 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.388098 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:13.388103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:13.388166 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:13.414095 1176706 cri.go:89] found id: ""
	I1217 00:57:13.414109 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.414117 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:13.414122 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:13.414178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:13.439153 1176706 cri.go:89] found id: ""
	I1217 00:57:13.439167 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.439174 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:13.439180 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:13.439237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:13.465255 1176706 cri.go:89] found id: ""
	I1217 00:57:13.465269 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.465277 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:13.465282 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:13.465342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:13.495274 1176706 cri.go:89] found id: ""
	I1217 00:57:13.495288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.495295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:13.495301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:13.495359 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:13.520781 1176706 cri.go:89] found id: ""
	I1217 00:57:13.520795 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.520803 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:13.520808 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:13.520868 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:13.547934 1176706 cri.go:89] found id: ""
	I1217 00:57:13.547948 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.547955 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:13.547963 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:13.547974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:13.613843 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:13.613863 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:13.632465 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:13.632491 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:13.697651 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:13.697662 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:13.697673 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:13.766608 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:13.766627 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:16.302176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:16.312389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:16.312476 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:16.338447 1176706 cri.go:89] found id: ""
	I1217 00:57:16.338461 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.338468 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:16.338473 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:16.338533 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:16.365319 1176706 cri.go:89] found id: ""
	I1217 00:57:16.365333 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.365340 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:16.365346 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:16.365408 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:16.396455 1176706 cri.go:89] found id: ""
	I1217 00:57:16.396476 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.396483 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:16.396489 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:16.396550 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:16.425795 1176706 cri.go:89] found id: ""
	I1217 00:57:16.425809 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.425816 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:16.425822 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:16.425887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:16.454749 1176706 cri.go:89] found id: ""
	I1217 00:57:16.454763 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.454770 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:16.454776 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:16.454834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:16.479542 1176706 cri.go:89] found id: ""
	I1217 00:57:16.479555 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.479562 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:16.479567 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:16.479626 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:16.508783 1176706 cri.go:89] found id: ""
	I1217 00:57:16.508798 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.508805 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:16.508813 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:16.508824 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:16.577494 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:16.577515 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:16.595191 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:16.595211 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:16.665505 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:16.665516 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:16.665528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:16.733110 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:16.733132 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:19.271702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:19.282422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:19.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:19.310766 1176706 cri.go:89] found id: ""
	I1217 00:57:19.310781 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.310788 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:19.310794 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:19.310856 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:19.336393 1176706 cri.go:89] found id: ""
	I1217 00:57:19.336407 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.336435 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:19.336441 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:19.336512 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:19.363243 1176706 cri.go:89] found id: ""
	I1217 00:57:19.363258 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.363265 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:19.363270 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:19.363329 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:19.389985 1176706 cri.go:89] found id: ""
	I1217 00:57:19.390000 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.390007 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:19.390013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:19.390073 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:19.416019 1176706 cri.go:89] found id: ""
	I1217 00:57:19.416032 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.416040 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:19.416045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:19.416103 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:19.445523 1176706 cri.go:89] found id: ""
	I1217 00:57:19.445538 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.445545 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:19.445550 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:19.445611 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:19.470033 1176706 cri.go:89] found id: ""
	I1217 00:57:19.470047 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.470055 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:19.470063 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:19.470075 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:19.535642 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:19.535662 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:19.553701 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:19.553718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:19.615955 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:19.615966 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:19.615977 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:19.685077 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:19.685098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.217382 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:22.227714 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:22.227775 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:22.252242 1176706 cri.go:89] found id: ""
	I1217 00:57:22.252256 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.252263 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:22.252268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:22.252325 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:22.277476 1176706 cri.go:89] found id: ""
	I1217 00:57:22.277491 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.277498 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:22.277504 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:22.277561 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:22.302807 1176706 cri.go:89] found id: ""
	I1217 00:57:22.302821 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.302829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:22.302834 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:22.302905 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:22.332455 1176706 cri.go:89] found id: ""
	I1217 00:57:22.332469 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.332476 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:22.332483 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:22.332552 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:22.361365 1176706 cri.go:89] found id: ""
	I1217 00:57:22.361380 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.361387 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:22.361392 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:22.361453 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:22.387211 1176706 cri.go:89] found id: ""
	I1217 00:57:22.387224 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.387232 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:22.387237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:22.387297 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:22.413238 1176706 cri.go:89] found id: ""
	I1217 00:57:22.413252 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.413260 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:22.413267 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:22.413278 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:22.478085 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:22.478096 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:22.478105 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:22.546790 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:22.546813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.582711 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:22.582732 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:22.648758 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:22.648780 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.166726 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:25.177337 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:25.177400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:25.202561 1176706 cri.go:89] found id: ""
	I1217 00:57:25.202576 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.202583 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:25.202589 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:25.202650 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:25.231070 1176706 cri.go:89] found id: ""
	I1217 00:57:25.231085 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.231092 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:25.231098 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:25.231162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:25.256786 1176706 cri.go:89] found id: ""
	I1217 00:57:25.256799 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.256806 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:25.256811 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:25.256870 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:25.282392 1176706 cri.go:89] found id: ""
	I1217 00:57:25.282415 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.282423 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:25.282429 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:25.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:25.311168 1176706 cri.go:89] found id: ""
	I1217 00:57:25.311182 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.311189 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:25.311195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:25.311259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:25.339431 1176706 cri.go:89] found id: ""
	I1217 00:57:25.339446 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.339453 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:25.339459 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:25.339517 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:25.365122 1176706 cri.go:89] found id: ""
	I1217 00:57:25.365136 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.365144 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:25.365152 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:25.365162 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:25.430307 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:25.430326 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.447805 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:25.447822 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:25.515790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:25.515802 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:25.515813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:25.590022 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:25.590049 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.122003 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:28.132581 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:28.132644 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:28.158913 1176706 cri.go:89] found id: ""
	I1217 00:57:28.158927 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.158944 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:28.158950 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:28.159029 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:28.185443 1176706 cri.go:89] found id: ""
	I1217 00:57:28.185478 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.185486 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:28.185492 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:28.185565 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:28.212156 1176706 cri.go:89] found id: ""
	I1217 00:57:28.212180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.212187 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:28.212193 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:28.212303 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:28.238113 1176706 cri.go:89] found id: ""
	I1217 00:57:28.238128 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.238135 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:28.238140 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:28.238198 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:28.267252 1176706 cri.go:89] found id: ""
	I1217 00:57:28.267266 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.267273 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:28.267278 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:28.267335 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:28.299262 1176706 cri.go:89] found id: ""
	I1217 00:57:28.299277 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.299284 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:28.299290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:28.299349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:28.325216 1176706 cri.go:89] found id: ""
	I1217 00:57:28.325231 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.325247 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:28.325255 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:28.325267 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:28.342976 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:28.342992 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:28.411022 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:28.411033 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:28.411044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:28.479626 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:28.479647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.508235 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:28.508251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.075024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:31.085476 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:31.085543 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:31.115244 1176706 cri.go:89] found id: ""
	I1217 00:57:31.115259 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.115267 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:31.115272 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:31.115332 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:31.146093 1176706 cri.go:89] found id: ""
	I1217 00:57:31.146111 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.146119 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:31.146125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:31.146188 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:31.173490 1176706 cri.go:89] found id: ""
	I1217 00:57:31.173505 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.173512 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:31.173518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:31.173577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:31.199862 1176706 cri.go:89] found id: ""
	I1217 00:57:31.199876 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.199883 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:31.199889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:31.199953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:31.229151 1176706 cri.go:89] found id: ""
	I1217 00:57:31.229164 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.229172 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:31.229177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:31.229234 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:31.255292 1176706 cri.go:89] found id: ""
	I1217 00:57:31.255306 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.255313 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:31.255319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:31.255378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:31.280011 1176706 cri.go:89] found id: ""
	I1217 00:57:31.280024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.280032 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:31.280040 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:31.280050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:31.351624 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:31.351644 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:31.380210 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:31.380226 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.448265 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:31.448288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:31.466144 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:31.466161 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:31.530079 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
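	(The cycle above repeats while the API server on localhost:8441 stays unreachable: minikube probes for each control-plane container with `sudo crictl ps -a --quiet --name=<component>`, finds no IDs, logs `No container was found matching "..."`, then gathers kubelet/dmesg/CRI-O logs and retries. Below is a minimal, hedged sketch of that probe loop, not minikube's actual cri.go/logs.go code; the component names and the crictl invocation are taken verbatim from the log lines above, everything else is illustrative.)

	// probe_sketch.go: re-runs the container checks seen in the log and reports
	// which control-plane components have no container, mirroring the warnings.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same command ssh_runner executes on the node:
	//   sudo crictl ps -a --quiet --name=<name>
	// and returns the container IDs it prints, one per line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		// Same component list the log iterates over on every retry.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("error probing %q: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			} else {
				fmt.Printf("found %d container(s) for %q\n", len(ids), c)
			}
		}
	}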
	I1217 00:57:34.030804 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:34.041923 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:34.041984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:34.070601 1176706 cri.go:89] found id: ""
	I1217 00:57:34.070617 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.070624 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:34.070630 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:34.070689 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:34.097552 1176706 cri.go:89] found id: ""
	I1217 00:57:34.097566 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.097573 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:34.097579 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:34.097647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:34.124476 1176706 cri.go:89] found id: ""
	I1217 00:57:34.124490 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.124497 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:34.124503 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:34.124580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:34.150077 1176706 cri.go:89] found id: ""
	I1217 00:57:34.150091 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.150099 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:34.150104 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:34.150162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:34.176964 1176706 cri.go:89] found id: ""
	I1217 00:57:34.176978 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.176992 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:34.176998 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:34.177055 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:34.201831 1176706 cri.go:89] found id: ""
	I1217 00:57:34.201845 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.201852 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:34.201857 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:34.201914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:34.227100 1176706 cri.go:89] found id: ""
	I1217 00:57:34.227114 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.227122 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:34.227129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:34.227140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:34.292098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.292108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:34.292119 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:34.361262 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:34.361287 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:34.395072 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:34.395087 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:34.462475 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:34.462498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:36.980702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:36.992944 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:36.993003 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:37.025574 1176706 cri.go:89] found id: ""
	I1217 00:57:37.025592 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.025616 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:37.025622 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:37.025707 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:37.054876 1176706 cri.go:89] found id: ""
	I1217 00:57:37.054890 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.054897 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:37.054903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:37.054968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:37.084974 1176706 cri.go:89] found id: ""
	I1217 00:57:37.084987 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.084995 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:37.085000 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:37.085059 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:37.110853 1176706 cri.go:89] found id: ""
	I1217 00:57:37.110867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.110874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:37.110883 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:37.110941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:37.137064 1176706 cri.go:89] found id: ""
	I1217 00:57:37.137083 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.137090 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:37.137096 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:37.137159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:37.167116 1176706 cri.go:89] found id: ""
	I1217 00:57:37.167130 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.167148 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:37.167162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:37.167230 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:37.192827 1176706 cri.go:89] found id: ""
	I1217 00:57:37.192848 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.192856 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:37.192863 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:37.192874 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:37.210956 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:37.210974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:37.275882 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:37.275893 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:37.275904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:37.344194 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:37.344215 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:37.375642 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:37.375658 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:39.944605 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:39.954951 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:39.955014 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:39.984302 1176706 cri.go:89] found id: ""
	I1217 00:57:39.984316 1176706 logs.go:282] 0 containers: []
	W1217 00:57:39.984323 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:39.984328 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:39.984383 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:40.029442 1176706 cri.go:89] found id: ""
	I1217 00:57:40.029458 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.029466 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:40.029471 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:40.029538 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:40.063022 1176706 cri.go:89] found id: ""
	I1217 00:57:40.063037 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.063044 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:40.063049 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:40.063110 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:40.094257 1176706 cri.go:89] found id: ""
	I1217 00:57:40.094272 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.094280 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:40.094286 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:40.094349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:40.127887 1176706 cri.go:89] found id: ""
	I1217 00:57:40.127901 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.127908 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:40.127913 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:40.127972 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:40.155475 1176706 cri.go:89] found id: ""
	I1217 00:57:40.155489 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.155496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:40.155502 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:40.155560 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:40.181940 1176706 cri.go:89] found id: ""
	I1217 00:57:40.181955 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.181962 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:40.181970 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:40.181980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:40.254464 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:40.254484 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:40.285810 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:40.285825 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:40.352509 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:40.352528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:40.370334 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:40.370356 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:40.432624 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:42.932898 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:42.943186 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:42.943245 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:42.971121 1176706 cri.go:89] found id: ""
	I1217 00:57:42.971137 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.971144 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:42.971149 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:42.971207 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:42.997154 1176706 cri.go:89] found id: ""
	I1217 00:57:42.997169 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.997175 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:42.997181 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:42.997240 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:43.034752 1176706 cri.go:89] found id: ""
	I1217 00:57:43.034767 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.034775 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:43.034781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:43.034840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:43.064326 1176706 cri.go:89] found id: ""
	I1217 00:57:43.064339 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.064347 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:43.064352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:43.064428 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:43.095997 1176706 cri.go:89] found id: ""
	I1217 00:57:43.096011 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.096019 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:43.096024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:43.096082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:43.126545 1176706 cri.go:89] found id: ""
	I1217 00:57:43.126560 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.126568 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:43.126573 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:43.126633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:43.157043 1176706 cri.go:89] found id: ""
	I1217 00:57:43.157058 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.157065 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:43.157073 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:43.157102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:43.223228 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:43.223248 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:43.241053 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:43.241070 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:43.307388 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:43.307398 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:43.307409 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:43.376649 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:43.376669 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:45.908814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:45.918992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:45.919051 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:45.944157 1176706 cri.go:89] found id: ""
	I1217 00:57:45.944170 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.944178 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:45.944183 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:45.944242 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:45.969417 1176706 cri.go:89] found id: ""
	I1217 00:57:45.969431 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.969438 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:45.969444 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:45.969502 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:45.995472 1176706 cri.go:89] found id: ""
	I1217 00:57:45.995486 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.995494 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:45.995499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:45.995566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:46.034994 1176706 cri.go:89] found id: ""
	I1217 00:57:46.035007 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.035015 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:46.035020 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:46.035081 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:46.065460 1176706 cri.go:89] found id: ""
	I1217 00:57:46.065473 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.065480 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:46.065486 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:46.065559 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:46.092450 1176706 cri.go:89] found id: ""
	I1217 00:57:46.092465 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.092472 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:46.092478 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:46.092557 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:46.122198 1176706 cri.go:89] found id: ""
	I1217 00:57:46.122212 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.122221 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:46.122229 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:46.122241 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:46.140129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:46.140147 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:46.204790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:46.204800 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:46.204810 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:46.273034 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:46.273054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:46.300763 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:46.300778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:48.875764 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:48.886304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:48.886369 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:48.923231 1176706 cri.go:89] found id: ""
	I1217 00:57:48.923246 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.923254 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:48.923259 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:48.923334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:48.951521 1176706 cri.go:89] found id: ""
	I1217 00:57:48.951536 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.951544 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:48.951549 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:48.951610 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:48.977574 1176706 cri.go:89] found id: ""
	I1217 00:57:48.977588 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.977595 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:48.977600 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:48.977661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:49.016389 1176706 cri.go:89] found id: ""
	I1217 00:57:49.016402 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.016410 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:49.016446 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:49.016511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:49.050180 1176706 cri.go:89] found id: ""
	I1217 00:57:49.050193 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.050201 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:49.050206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:49.050271 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:49.088387 1176706 cri.go:89] found id: ""
	I1217 00:57:49.088401 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.088409 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:49.088445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:49.088508 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:49.118579 1176706 cri.go:89] found id: ""
	I1217 00:57:49.118593 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.118600 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:49.118608 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:49.118618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:49.189917 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:49.189938 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:49.208217 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:49.208234 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:49.270961 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:49.270977 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:49.270988 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:49.340033 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:49.340054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:51.873428 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:51.883781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:51.883840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:51.908479 1176706 cri.go:89] found id: ""
	I1217 00:57:51.908493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.908500 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:51.908505 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:51.908562 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:51.938045 1176706 cri.go:89] found id: ""
	I1217 00:57:51.938061 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.938068 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:51.938073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:51.938135 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:51.964570 1176706 cri.go:89] found id: ""
	I1217 00:57:51.964585 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.964592 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:51.964597 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:51.964654 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:51.989700 1176706 cri.go:89] found id: ""
	I1217 00:57:51.989714 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.989722 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:51.989727 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:51.989784 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:52.030756 1176706 cri.go:89] found id: ""
	I1217 00:57:52.030771 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.030779 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:52.030786 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:52.030860 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:52.067806 1176706 cri.go:89] found id: ""
	I1217 00:57:52.067829 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.067838 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:52.067845 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:52.067915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:52.097071 1176706 cri.go:89] found id: ""
	I1217 00:57:52.097102 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.097110 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:52.097118 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:52.097128 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:52.169931 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:52.169952 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:52.202012 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:52.202031 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:52.267897 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:52.267917 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:52.286898 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:52.286920 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:52.352095 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:54.853773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:54.863649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:54.863712 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:54.888435 1176706 cri.go:89] found id: ""
	I1217 00:57:54.888449 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.888456 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:54.888462 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:54.888523 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:54.927009 1176706 cri.go:89] found id: ""
	I1217 00:57:54.927024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.927031 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:54.927037 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:54.927095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:54.953405 1176706 cri.go:89] found id: ""
	I1217 00:57:54.953420 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.953428 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:54.953434 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:54.953493 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:54.979162 1176706 cri.go:89] found id: ""
	I1217 00:57:54.979176 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.979183 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:54.979189 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:54.979256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:55.025542 1176706 cri.go:89] found id: ""
	I1217 00:57:55.025564 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.025572 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:55.025577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:55.025641 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:55.059408 1176706 cri.go:89] found id: ""
	I1217 00:57:55.059422 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.059429 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:55.059435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:55.059492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:55.085846 1176706 cri.go:89] found id: ""
	I1217 00:57:55.085860 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.085867 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:55.085875 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:55.085884 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:55.154061 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:55.154083 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:55.182650 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:55.182667 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:55.252924 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:55.252945 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:55.271464 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:55.271481 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:55.340175 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:57.840461 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:57.853057 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:57.853178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:57.883066 1176706 cri.go:89] found id: ""
	I1217 00:57:57.883081 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.883088 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:57.883094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:57.883152 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:57.909166 1176706 cri.go:89] found id: ""
	I1217 00:57:57.909180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.909189 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:57.909195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:57.909255 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:57.935701 1176706 cri.go:89] found id: ""
	I1217 00:57:57.935716 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.935733 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:57.935739 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:57.935805 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:57.969374 1176706 cri.go:89] found id: ""
	I1217 00:57:57.969397 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.969404 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:57.969410 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:57.969481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:57.995365 1176706 cri.go:89] found id: ""
	I1217 00:57:57.995379 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.995397 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:57.995404 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:57.995460 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:58.025187 1176706 cri.go:89] found id: ""
	I1217 00:57:58.025207 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.025215 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:58.025221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:58.025343 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:58.062705 1176706 cri.go:89] found id: ""
	I1217 00:57:58.062719 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.062738 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:58.062745 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:58.062755 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:58.135108 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:58.135129 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:58.154038 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:58.154058 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:58.219558 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:58.219569 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:58.219582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:58.287658 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:58.287678 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:00.817470 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:00.827992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:00.828056 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:00.852955 1176706 cri.go:89] found id: ""
	I1217 00:58:00.852969 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.852976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:00.852983 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:00.853043 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:00.877725 1176706 cri.go:89] found id: ""
	I1217 00:58:00.877739 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.877746 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:00.877751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:00.877811 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:00.901883 1176706 cri.go:89] found id: ""
	I1217 00:58:00.901897 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.901905 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:00.901910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:00.901965 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:00.928695 1176706 cri.go:89] found id: ""
	I1217 00:58:00.928709 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.928716 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:00.928722 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:00.928780 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:00.953517 1176706 cri.go:89] found id: ""
	I1217 00:58:00.953531 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.953538 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:00.953544 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:00.953601 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:00.982916 1176706 cri.go:89] found id: ""
	I1217 00:58:00.982930 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.982946 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:00.982952 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:00.983021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:01.012486 1176706 cri.go:89] found id: ""
	I1217 00:58:01.012510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:01.012518 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:01.012526 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:01.012538 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:01.034573 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:01.034595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:01.107160 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:01.107170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:01.107180 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:01.180136 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:01.180158 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:01.212434 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:01.212451 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:03.780773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:03.791245 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:03.791309 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:03.819281 1176706 cri.go:89] found id: ""
	I1217 00:58:03.819296 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.819304 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:03.819309 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:03.819367 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:03.847330 1176706 cri.go:89] found id: ""
	I1217 00:58:03.847344 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.847351 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:03.847357 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:03.847416 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:03.874793 1176706 cri.go:89] found id: ""
	I1217 00:58:03.874806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.874814 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:03.874819 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:03.874883 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:03.904651 1176706 cri.go:89] found id: ""
	I1217 00:58:03.904665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.904672 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:03.904678 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:03.904744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:03.930157 1176706 cri.go:89] found id: ""
	I1217 00:58:03.930178 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.930186 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:03.930191 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:03.930252 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:03.960348 1176706 cri.go:89] found id: ""
	I1217 00:58:03.960371 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.960380 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:03.960386 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:03.960473 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:03.985501 1176706 cri.go:89] found id: ""
	I1217 00:58:03.985515 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.985523 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:03.985530 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:03.985541 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:04.005563 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:04.005592 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:04.085204 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:04.085219 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:04.085231 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:04.154363 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:04.154385 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:04.182481 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:04.182498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:06.754413 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:06.765192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:06.765266 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:06.792761 1176706 cri.go:89] found id: ""
	I1217 00:58:06.792779 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.792786 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:06.792791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:06.792850 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:06.817882 1176706 cri.go:89] found id: ""
	I1217 00:58:06.817896 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.817903 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:06.817909 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:06.817967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:06.843295 1176706 cri.go:89] found id: ""
	I1217 00:58:06.843309 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.843316 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:06.843321 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:06.843380 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:06.871025 1176706 cri.go:89] found id: ""
	I1217 00:58:06.871039 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.871046 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:06.871052 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:06.871109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:06.899109 1176706 cri.go:89] found id: ""
	I1217 00:58:06.899124 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.899132 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:06.899137 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:06.899212 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:06.923948 1176706 cri.go:89] found id: ""
	I1217 00:58:06.923962 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.923980 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:06.923987 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:06.924045 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:06.948813 1176706 cri.go:89] found id: ""
	I1217 00:58:06.948827 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.948834 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:06.948842 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:06.948853 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:07.015114 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:07.015140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:07.034991 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:07.035010 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:07.105757 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:07.105767 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:07.105778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:07.177693 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:07.177717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:09.709755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:09.720409 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:09.720507 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:09.745603 1176706 cri.go:89] found id: ""
	I1217 00:58:09.745618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.745626 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:09.745631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:09.745691 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:09.775492 1176706 cri.go:89] found id: ""
	I1217 00:58:09.775507 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.775515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:09.775520 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:09.775579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:09.801149 1176706 cri.go:89] found id: ""
	I1217 00:58:09.801164 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.801171 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:09.801177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:09.801238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:09.830147 1176706 cri.go:89] found id: ""
	I1217 00:58:09.830160 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.830168 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:09.830173 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:09.830232 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:09.858791 1176706 cri.go:89] found id: ""
	I1217 00:58:09.858806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.858825 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:09.858832 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:09.858911 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:09.884827 1176706 cri.go:89] found id: ""
	I1217 00:58:09.884842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.884849 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:09.884855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:09.884918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:09.910380 1176706 cri.go:89] found id: ""
	I1217 00:58:09.910394 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.910402 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:09.910409 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:09.910420 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:09.976905 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:09.976924 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:09.995004 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:09.995027 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:10.084593 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:10.084604 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:10.084614 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:10.157583 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:10.157604 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.691225 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:12.701275 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:12.701340 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:12.730986 1176706 cri.go:89] found id: ""
	I1217 00:58:12.731000 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.731018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:12.731024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:12.731084 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:12.757010 1176706 cri.go:89] found id: ""
	I1217 00:58:12.757029 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.757037 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:12.757045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:12.757119 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:12.782232 1176706 cri.go:89] found id: ""
	I1217 00:58:12.782245 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.782252 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:12.782257 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:12.782314 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:12.808352 1176706 cri.go:89] found id: ""
	I1217 00:58:12.808366 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.808373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:12.808378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:12.808472 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:12.834094 1176706 cri.go:89] found id: ""
	I1217 00:58:12.834109 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.834116 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:12.834121 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:12.834184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:12.861537 1176706 cri.go:89] found id: ""
	I1217 00:58:12.861551 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.861558 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:12.861564 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:12.861625 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:12.891320 1176706 cri.go:89] found id: ""
	I1217 00:58:12.891334 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.891351 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:12.891360 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:12.891373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:12.961252 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:12.961272 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.990873 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:12.990889 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:13.068166 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:13.068185 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:13.087641 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:13.087660 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:13.158967 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:15.660635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:15.670593 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:15.670685 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:15.695674 1176706 cri.go:89] found id: ""
	I1217 00:58:15.695688 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.695695 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:15.695700 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:15.695757 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:15.723007 1176706 cri.go:89] found id: ""
	I1217 00:58:15.723020 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.723028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:15.723033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:15.723093 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:15.752134 1176706 cri.go:89] found id: ""
	I1217 00:58:15.752149 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.752156 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:15.752161 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:15.752219 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:15.777521 1176706 cri.go:89] found id: ""
	I1217 00:58:15.777535 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.777542 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:15.777547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:15.777606 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:15.805205 1176706 cri.go:89] found id: ""
	I1217 00:58:15.805220 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.805233 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:15.805239 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:15.805296 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:15.830102 1176706 cri.go:89] found id: ""
	I1217 00:58:15.830116 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.830123 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:15.830129 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:15.830191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:15.859258 1176706 cri.go:89] found id: ""
	I1217 00:58:15.859272 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.859279 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:15.859297 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:15.859307 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:15.924910 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:15.924930 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:15.943203 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:15.943219 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:16.011016 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:16.011027 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:16.011038 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:16.094076 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:16.094096 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:18.624032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:18.634861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:18.634925 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:18.660502 1176706 cri.go:89] found id: ""
	I1217 00:58:18.660528 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.660536 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:18.660541 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:18.660600 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:18.685828 1176706 cri.go:89] found id: ""
	I1217 00:58:18.685841 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.685848 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:18.685854 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:18.685920 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:18.716173 1176706 cri.go:89] found id: ""
	I1217 00:58:18.716187 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.716194 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:18.716199 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:18.716260 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:18.742960 1176706 cri.go:89] found id: ""
	I1217 00:58:18.742975 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.742983 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:18.742988 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:18.743046 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:18.768597 1176706 cri.go:89] found id: ""
	I1217 00:58:18.768610 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.768623 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:18.768628 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:18.768687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:18.795244 1176706 cri.go:89] found id: ""
	I1217 00:58:18.795267 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.795276 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:18.795281 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:18.795355 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:18.826316 1176706 cri.go:89] found id: ""
	I1217 00:58:18.826330 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.826337 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:18.826345 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:18.826354 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:18.892936 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:18.892954 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:18.911274 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:18.911292 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:18.973399 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:18.973409 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:18.973432 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:19.052103 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:19.052124 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.589056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:21.599320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:21.599382 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:21.626547 1176706 cri.go:89] found id: ""
	I1217 00:58:21.626561 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.626568 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:21.626574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:21.626631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:21.651881 1176706 cri.go:89] found id: ""
	I1217 00:58:21.651895 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.651902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:21.651910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:21.651967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:21.677496 1176706 cri.go:89] found id: ""
	I1217 00:58:21.677510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.677519 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:21.677524 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:21.677580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:21.701536 1176706 cri.go:89] found id: ""
	I1217 00:58:21.701550 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.701557 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:21.701562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:21.701619 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:21.725663 1176706 cri.go:89] found id: ""
	I1217 00:58:21.725677 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.725695 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:21.725701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:21.725772 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:21.749912 1176706 cri.go:89] found id: ""
	I1217 00:58:21.749926 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.749937 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:21.749943 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:21.750000 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:21.774360 1176706 cri.go:89] found id: ""
	I1217 00:58:21.774374 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.774381 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:21.774389 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:21.774399 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:21.841964 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:21.841983 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.870200 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:21.870218 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:21.943734 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:21.943754 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:21.961798 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:21.961816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:22.037147 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.537433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:24.547596 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:24.547661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:24.575282 1176706 cri.go:89] found id: ""
	I1217 00:58:24.575297 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.575306 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:24.575312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:24.575371 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:24.600578 1176706 cri.go:89] found id: ""
	I1217 00:58:24.600592 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.600599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:24.600604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:24.600665 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:24.626604 1176706 cri.go:89] found id: ""
	I1217 00:58:24.626618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.626626 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:24.626631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:24.626687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:24.652284 1176706 cri.go:89] found id: ""
	I1217 00:58:24.652298 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.652316 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:24.652323 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:24.652381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:24.681413 1176706 cri.go:89] found id: ""
	I1217 00:58:24.681426 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.681433 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:24.681439 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:24.681495 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:24.709801 1176706 cri.go:89] found id: ""
	I1217 00:58:24.709815 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.709822 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:24.709830 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:24.709887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:24.741982 1176706 cri.go:89] found id: ""
	I1217 00:58:24.741995 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.742010 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:24.742018 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:24.742029 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:24.806559 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.806571 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:24.806581 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:24.875943 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:24.875962 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:24.904944 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:24.904960 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:24.972857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:24.972878 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.491741 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:27.502162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:27.502241 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:27.528328 1176706 cri.go:89] found id: ""
	I1217 00:58:27.528343 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.528350 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:27.528356 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:27.528455 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:27.558520 1176706 cri.go:89] found id: ""
	I1217 00:58:27.558534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.558541 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:27.558547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:27.558605 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:27.587047 1176706 cri.go:89] found id: ""
	I1217 00:58:27.587061 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.587070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:27.587075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:27.587133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:27.615351 1176706 cri.go:89] found id: ""
	I1217 00:58:27.615365 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.615373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:27.615381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:27.615443 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:27.640936 1176706 cri.go:89] found id: ""
	I1217 00:58:27.640950 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.640959 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:27.640964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:27.641021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:27.667985 1176706 cri.go:89] found id: ""
	I1217 00:58:27.667999 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.668007 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:27.668013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:27.668077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:27.694148 1176706 cri.go:89] found id: ""
	I1217 00:58:27.694162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.694170 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:27.694177 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:27.694188 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:27.764618 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:27.764639 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.784025 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:27.784040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:27.852310 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:27.852320 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:27.852331 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:27.922044 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:27.922065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:30.450766 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:30.460791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:30.460852 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:30.485997 1176706 cri.go:89] found id: ""
	I1217 00:58:30.486011 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.486018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:30.486023 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:30.486080 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:30.512125 1176706 cri.go:89] found id: ""
	I1217 00:58:30.512138 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.512157 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:30.512163 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:30.512221 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:30.538512 1176706 cri.go:89] found id: ""
	I1217 00:58:30.538526 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.538533 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:30.538539 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:30.538597 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:30.564757 1176706 cri.go:89] found id: ""
	I1217 00:58:30.564771 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.564778 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:30.564784 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:30.564842 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:30.594808 1176706 cri.go:89] found id: ""
	I1217 00:58:30.594821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.594840 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:30.594846 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:30.594919 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:30.624595 1176706 cri.go:89] found id: ""
	I1217 00:58:30.624609 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.624617 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:30.624623 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:30.624683 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:30.653013 1176706 cri.go:89] found id: ""
	I1217 00:58:30.653027 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.653034 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:30.653042 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:30.653052 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:30.720030 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:30.720050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:30.738237 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:30.738255 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:30.801692 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:30.801705 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:30.801717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:30.870606 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:30.870628 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.401439 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:33.411804 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:33.411865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:33.437664 1176706 cri.go:89] found id: ""
	I1217 00:58:33.437678 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.437686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:33.437692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:33.437752 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:33.463774 1176706 cri.go:89] found id: ""
	I1217 00:58:33.463796 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.463803 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:33.463809 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:33.463865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:33.492800 1176706 cri.go:89] found id: ""
	I1217 00:58:33.492822 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.492829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:33.492835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:33.492896 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:33.518396 1176706 cri.go:89] found id: ""
	I1217 00:58:33.518410 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.518417 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:33.518422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:33.518481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:33.545369 1176706 cri.go:89] found id: ""
	I1217 00:58:33.545385 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.545393 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:33.545398 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:33.545469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:33.571642 1176706 cri.go:89] found id: ""
	I1217 00:58:33.571665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.571673 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:33.571679 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:33.571751 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:33.598928 1176706 cri.go:89] found id: ""
	I1217 00:58:33.598953 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.598961 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:33.598970 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:33.598980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:33.617218 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:33.617237 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:33.681042 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:33.681053 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:33.681064 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:33.750561 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:33.750582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.779618 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:33.779637 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.351872 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:36.361748 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:36.361812 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:36.387484 1176706 cri.go:89] found id: ""
	I1217 00:58:36.387498 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.387505 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:36.387511 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:36.387567 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:36.413880 1176706 cri.go:89] found id: ""
	I1217 00:58:36.413894 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.413902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:36.413922 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:36.413979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:36.439073 1176706 cri.go:89] found id: ""
	I1217 00:58:36.439087 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.439095 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:36.439100 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:36.439159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:36.464148 1176706 cri.go:89] found id: ""
	I1217 00:58:36.464162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.464169 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:36.464175 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:36.464237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:36.489659 1176706 cri.go:89] found id: ""
	I1217 00:58:36.489673 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.489681 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:36.489686 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:36.489744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:36.514865 1176706 cri.go:89] found id: ""
	I1217 00:58:36.514879 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.514887 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:36.514892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:36.514953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:36.545081 1176706 cri.go:89] found id: ""
	I1217 00:58:36.545095 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.545103 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:36.545110 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:36.545120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:36.620571 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:36.620599 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:36.652294 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:36.652313 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.720685 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:36.720708 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:36.738692 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:36.738709 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:36.804409 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:39.304571 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:39.315407 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:39.315469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:39.343748 1176706 cri.go:89] found id: ""
	I1217 00:58:39.343762 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.343769 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:39.343775 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:39.343834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:39.371633 1176706 cri.go:89] found id: ""
	I1217 00:58:39.371648 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.371655 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:39.371661 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:39.371750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:39.397168 1176706 cri.go:89] found id: ""
	I1217 00:58:39.397183 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.397190 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:39.397196 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:39.397254 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:39.422379 1176706 cri.go:89] found id: ""
	I1217 00:58:39.422393 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.422400 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:39.422406 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:39.422466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:39.451362 1176706 cri.go:89] found id: ""
	I1217 00:58:39.451376 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.451384 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:39.451389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:39.451447 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:39.476838 1176706 cri.go:89] found id: ""
	I1217 00:58:39.476852 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.476862 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:39.476867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:39.476926 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:39.501892 1176706 cri.go:89] found id: ""
	I1217 00:58:39.501905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.501912 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:39.501924 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:39.501933 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:39.571771 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:39.571783 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:39.571793 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:39.642123 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:39.642144 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:39.673585 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:39.673602 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:39.742217 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:39.742236 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.260825 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:42.274064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:42.274140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:42.315322 1176706 cri.go:89] found id: ""
	I1217 00:58:42.315336 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.315346 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:42.315352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:42.315432 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:42.348891 1176706 cri.go:89] found id: ""
	I1217 00:58:42.348906 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.348914 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:42.348920 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:42.348984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:42.376853 1176706 cri.go:89] found id: ""
	I1217 00:58:42.376867 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.376874 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:42.376880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:42.376940 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:42.402292 1176706 cri.go:89] found id: ""
	I1217 00:58:42.402307 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.402315 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:42.402320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:42.402381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:42.432293 1176706 cri.go:89] found id: ""
	I1217 00:58:42.432306 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.432314 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:42.432319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:42.432378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:42.459173 1176706 cri.go:89] found id: ""
	I1217 00:58:42.459188 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.459195 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:42.459200 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:42.459259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:42.485520 1176706 cri.go:89] found id: ""
	I1217 00:58:42.485534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.485541 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:42.485549 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:42.485562 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:42.553260 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:42.553281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.571244 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:42.571261 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:42.633598 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:42.633609 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:42.633622 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:42.706387 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:42.706408 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.237565 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:45.259348 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:45.259429 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:45.305574 1176706 cri.go:89] found id: ""
	I1217 00:58:45.305589 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.305597 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:45.305602 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:45.305664 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:45.349162 1176706 cri.go:89] found id: ""
	I1217 00:58:45.349177 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.349187 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:45.349192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:45.349256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:45.376828 1176706 cri.go:89] found id: ""
	I1217 00:58:45.376842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.376849 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:45.376855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:45.376915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:45.402470 1176706 cri.go:89] found id: ""
	I1217 00:58:45.402485 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.402492 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:45.402497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:45.402554 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:45.429756 1176706 cri.go:89] found id: ""
	I1217 00:58:45.429790 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.429820 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:45.429842 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:45.429980 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:45.459618 1176706 cri.go:89] found id: ""
	I1217 00:58:45.459632 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.459640 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:45.459647 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:45.459709 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:45.486504 1176706 cri.go:89] found id: ""
	I1217 00:58:45.486518 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.486526 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:45.486533 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:45.486549 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:45.505026 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:45.505044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:45.569592 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:45.569602 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:45.569612 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:45.642249 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:45.642270 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.673783 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:45.673799 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.241441 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:48.253986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:48.254052 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:48.298549 1176706 cri.go:89] found id: ""
	I1217 00:58:48.298562 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.298569 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:48.298575 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:48.298633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:48.329982 1176706 cri.go:89] found id: ""
	I1217 00:58:48.329997 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.330004 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:48.330010 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:48.330068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:48.356278 1176706 cri.go:89] found id: ""
	I1217 00:58:48.356291 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.356298 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:48.356304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:48.356363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:48.381931 1176706 cri.go:89] found id: ""
	I1217 00:58:48.381944 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.381952 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:48.381957 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:48.382012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:48.408076 1176706 cri.go:89] found id: ""
	I1217 00:58:48.408091 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.408098 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:48.408103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:48.408167 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:48.438515 1176706 cri.go:89] found id: ""
	I1217 00:58:48.438529 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.438536 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:48.438542 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:48.438615 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:48.464771 1176706 cri.go:89] found id: ""
	I1217 00:58:48.464784 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.464791 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:48.464800 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:48.464815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.531756 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:48.531777 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:48.550180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:48.550197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:48.614503 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:48.614514 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:48.614524 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:48.683497 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:48.683519 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:51.214024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:51.224516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:51.224581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:51.250104 1176706 cri.go:89] found id: ""
	I1217 00:58:51.250118 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.250125 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:51.250131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:51.250204 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:51.287241 1176706 cri.go:89] found id: ""
	I1217 00:58:51.287255 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.287263 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:51.287268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:51.287334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:51.326285 1176706 cri.go:89] found id: ""
	I1217 00:58:51.326299 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.326306 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:51.326312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:51.326375 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:51.353495 1176706 cri.go:89] found id: ""
	I1217 00:58:51.353509 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.353516 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:51.353521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:51.353577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:51.379404 1176706 cri.go:89] found id: ""
	I1217 00:58:51.379417 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.379425 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:51.379430 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:51.379489 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:51.405891 1176706 cri.go:89] found id: ""
	I1217 00:58:51.405905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.405912 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:51.405919 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:51.405979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:51.431497 1176706 cri.go:89] found id: ""
	I1217 00:58:51.431510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.431529 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:51.431537 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:51.431547 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:51.497786 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:51.497805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:51.516101 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:51.516120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:51.584128 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:51.584139 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:51.584150 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:51.652739 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:51.652760 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
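Each iteration above follows the same shape: pgrep for a kube-apiserver process, then ask crictl for each control-plane container by name, and, when every listing comes back empty, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The sketch below is a hypothetical, minimal Go illustration of that per-component crictl check; the component names and the `crictl ps -a --quiet --name=<component>` invocation are copied from the log lines above, while the program structure and output wording are assumptions, not minikube's actual implementation.

// Hypothetical sketch: list control-plane containers by name via crictl,
// mirroring the "listing CRI containers" / "0 containers" lines in the log.
// Assumes crictl is on PATH and sudo is available; illustration only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		// Mirrors: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

In every iteration recorded here all seven listings are empty, which is why the run proceeds straight to the log-gathering fallback each time.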
	I1217 00:58:54.182755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:54.194058 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:54.194127 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:54.219806 1176706 cri.go:89] found id: ""
	I1217 00:58:54.219821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.219828 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:54.219833 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:54.219894 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:54.245268 1176706 cri.go:89] found id: ""
	I1217 00:58:54.245281 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.245289 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:54.245294 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:54.245353 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:54.278676 1176706 cri.go:89] found id: ""
	I1217 00:58:54.278690 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.278697 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:54.278703 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:54.278766 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:54.305307 1176706 cri.go:89] found id: ""
	I1217 00:58:54.305321 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.305329 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:54.305334 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:54.305400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:54.330666 1176706 cri.go:89] found id: ""
	I1217 00:58:54.330680 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.330688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:54.330693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:54.330763 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:54.356855 1176706 cri.go:89] found id: ""
	I1217 00:58:54.356875 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.356886 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:54.356892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:54.356985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:54.390389 1176706 cri.go:89] found id: ""
	I1217 00:58:54.390404 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.390411 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:54.390419 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:54.390429 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:54.456633 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:54.456654 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:54.474716 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:54.474734 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:54.542032 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:54.542052 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:54.542063 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:54.614689 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:54.614710 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:57.146377 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:57.156881 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:57.156942 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:57.181785 1176706 cri.go:89] found id: ""
	I1217 00:58:57.181800 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.181808 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:57.181813 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:57.181869 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:57.208021 1176706 cri.go:89] found id: ""
	I1217 00:58:57.208046 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.208059 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:57.208065 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:57.208133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:57.235483 1176706 cri.go:89] found id: ""
	I1217 00:58:57.235497 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.235505 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:57.235510 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:57.235569 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:57.269950 1176706 cri.go:89] found id: ""
	I1217 00:58:57.269972 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.269980 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:57.269986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:57.270063 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:57.296896 1176706 cri.go:89] found id: ""
	I1217 00:58:57.296911 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.296918 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:57.296924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:57.296983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:57.325435 1176706 cri.go:89] found id: ""
	I1217 00:58:57.325452 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.325462 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:57.325468 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:57.325526 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:57.350942 1176706 cri.go:89] found id: ""
	I1217 00:58:57.350957 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.350965 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:57.350973 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:57.350982 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:57.416866 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:57.416886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:57.434717 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:57.434736 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:57.499393 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:57.499403 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:57.499414 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:57.567648 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:57.567668 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:00.097029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:00.143893 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:00.143993 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:00.260647 1176706 cri.go:89] found id: ""
	I1217 00:59:00.262402 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.262438 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:00.262449 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:00.262564 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:00.330718 1176706 cri.go:89] found id: ""
	I1217 00:59:00.330734 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.330745 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:00.330751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:00.330862 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:00.372603 1176706 cri.go:89] found id: ""
	I1217 00:59:00.372630 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.372638 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:00.372645 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:00.372721 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:00.403443 1176706 cri.go:89] found id: ""
	I1217 00:59:00.403469 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.403478 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:00.403484 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:00.403558 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:00.432235 1176706 cri.go:89] found id: ""
	I1217 00:59:00.432260 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.432268 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:00.432274 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:00.432341 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:00.464475 1176706 cri.go:89] found id: ""
	I1217 00:59:00.464489 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.464496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:00.464501 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:00.464563 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:00.494126 1176706 cri.go:89] found id: ""
	I1217 00:59:00.494156 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.494164 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:00.494172 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:00.494182 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:00.564811 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:00.564831 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:00.582720 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:00.582738 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:00.643909 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:00.643921 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:00.643931 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:00.716875 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:00.716895 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
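The kubectl stderr captured in each iteration fails with "dial tcp [::1]:8441: connect: connection refused", meaning nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings. A small, hypothetical probe of that endpoint is sketched below; it is not part of minikube or the test harness, and the use of /livez and the 5-second timeout are assumptions chosen only to illustrate the distinction between "connection refused" (no listener) and an unhealthy-but-listening apiserver.

// Hypothetical probe of the endpoint kubectl keeps failing to reach
// (https://localhost:8441). Illustration only; not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification: this probe only cares whether the
		// socket accepts connections, not whether the serving cert is trusted.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/livez")
	if err != nil {
		// Matches the log: e.g. "connect: connection refused" when no apiserver is up.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded with HTTP status:", resp.Status)
}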
	I1217 00:59:03.245660 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:03.256968 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:03.257032 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:03.283953 1176706 cri.go:89] found id: ""
	I1217 00:59:03.283968 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.283976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:03.283981 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:03.284041 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:03.313015 1176706 cri.go:89] found id: ""
	I1217 00:59:03.313029 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.313036 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:03.313041 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:03.313098 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:03.341219 1176706 cri.go:89] found id: ""
	I1217 00:59:03.341233 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.341241 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:03.341246 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:03.341304 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:03.366417 1176706 cri.go:89] found id: ""
	I1217 00:59:03.366430 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.366437 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:03.366443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:03.366499 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:03.395548 1176706 cri.go:89] found id: ""
	I1217 00:59:03.395561 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.395568 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:03.395574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:03.395631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:03.425673 1176706 cri.go:89] found id: ""
	I1217 00:59:03.425687 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.425694 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:03.425699 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:03.425758 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:03.452761 1176706 cri.go:89] found id: ""
	I1217 00:59:03.452775 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.452782 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:03.452790 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:03.452813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:03.470985 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:03.471004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:03.539585 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:03.539606 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:03.539617 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:03.608766 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:03.608787 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.641472 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:03.641487 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.214627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:06.225029 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:06.225095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:06.262898 1176706 cri.go:89] found id: ""
	I1217 00:59:06.262912 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.262919 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:06.262924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:06.262979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:06.295811 1176706 cri.go:89] found id: ""
	I1217 00:59:06.295825 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.295832 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:06.295837 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:06.295900 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:06.325305 1176706 cri.go:89] found id: ""
	I1217 00:59:06.325319 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.325326 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:06.325331 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:06.325388 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:06.350976 1176706 cri.go:89] found id: ""
	I1217 00:59:06.350990 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.350997 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:06.351002 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:06.351061 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:06.381013 1176706 cri.go:89] found id: ""
	I1217 00:59:06.381027 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.381034 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:06.381040 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:06.381156 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:06.407543 1176706 cri.go:89] found id: ""
	I1217 00:59:06.407556 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.407564 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:06.407569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:06.407627 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:06.435419 1176706 cri.go:89] found id: ""
	I1217 00:59:06.435433 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.435440 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:06.435448 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:06.435460 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:06.472071 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:06.472098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.540915 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:06.540936 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:06.558800 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:06.558816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:06.626144 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:06.626156 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:06.626167 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.199032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:09.210273 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:09.210345 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:09.238454 1176706 cri.go:89] found id: ""
	I1217 00:59:09.238468 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.238475 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:09.238481 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:09.238539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:09.283355 1176706 cri.go:89] found id: ""
	I1217 00:59:09.283369 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.283377 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:09.283382 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:09.283452 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:09.322894 1176706 cri.go:89] found id: ""
	I1217 00:59:09.322909 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.322917 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:09.322924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:09.322983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:09.349261 1176706 cri.go:89] found id: ""
	I1217 00:59:09.349275 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.349282 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:09.349290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:09.349348 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:09.375365 1176706 cri.go:89] found id: ""
	I1217 00:59:09.375381 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.375390 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:09.375395 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:09.375458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:09.404751 1176706 cri.go:89] found id: ""
	I1217 00:59:09.404765 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.404773 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:09.404778 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:09.404840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:09.430184 1176706 cri.go:89] found id: ""
	I1217 00:59:09.430198 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.430206 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:09.430214 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:09.430224 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:09.496857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:09.496876 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:09.515406 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:09.515423 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:09.581087 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:09.581098 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:09.581109 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.650268 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:09.650288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.181362 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:12.192867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:12.192928 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:12.219737 1176706 cri.go:89] found id: ""
	I1217 00:59:12.219750 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.219757 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:12.219763 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:12.219821 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:12.245063 1176706 cri.go:89] found id: ""
	I1217 00:59:12.245084 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.245091 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:12.245097 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:12.245165 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:12.272131 1176706 cri.go:89] found id: ""
	I1217 00:59:12.272145 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.272152 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:12.272157 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:12.272216 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:12.303997 1176706 cri.go:89] found id: ""
	I1217 00:59:12.304011 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.304018 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:12.304024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:12.304085 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:12.333611 1176706 cri.go:89] found id: ""
	I1217 00:59:12.333624 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.333632 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:12.333637 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:12.333693 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:12.363773 1176706 cri.go:89] found id: ""
	I1217 00:59:12.363789 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.363797 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:12.363802 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:12.363863 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:12.389846 1176706 cri.go:89] found id: ""
	I1217 00:59:12.389861 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.389868 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:12.389875 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:12.389886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:12.407604 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:12.407621 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:12.473182 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:12.473192 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:12.473203 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:12.543348 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:12.543369 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.577767 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:12.577783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.146065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:15.160131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:15.160197 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:15.190612 1176706 cri.go:89] found id: ""
	I1217 00:59:15.190626 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.190634 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:15.190639 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:15.190699 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:15.218099 1176706 cri.go:89] found id: ""
	I1217 00:59:15.218113 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.218121 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:15.218126 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:15.218184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:15.248835 1176706 cri.go:89] found id: ""
	I1217 00:59:15.248848 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.248856 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:15.248861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:15.248918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:15.285228 1176706 cri.go:89] found id: ""
	I1217 00:59:15.285242 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.285250 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:15.285256 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:15.285342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:15.319666 1176706 cri.go:89] found id: ""
	I1217 00:59:15.319684 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.319692 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:15.319697 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:15.319762 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:15.349950 1176706 cri.go:89] found id: ""
	I1217 00:59:15.349964 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.349971 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:15.349985 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:15.350057 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:15.377523 1176706 cri.go:89] found id: ""
	I1217 00:59:15.377539 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.377546 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:15.377553 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:15.377563 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.444971 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:15.444997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:15.463350 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:15.463367 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:15.527808 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:15.527819 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:15.527829 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:15.596798 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:15.596819 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
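The timestamps (00:58:51, 00:58:54, 00:58:57, ...) show the pgrep check being retried roughly every three seconds. The sketch below illustrates that cadence as a simple poll-until-deadline loop; the three-second interval is read off the timestamps above, but the six-minute deadline, function names, and messages are assumptions for illustration, not minikube's actual wait logic.

// Hypothetical sketch of the ~3s retry cadence visible in the timestamps:
// re-run "sudo pgrep -xnf kube-apiserver.*minikube.*" until it succeeds
// or an assumed overall deadline passes. Illustration only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches, which Run reports as an error.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet; retrying in ~3s")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}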
	I1217 00:59:18.130677 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:18.141262 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:18.141323 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:18.169116 1176706 cri.go:89] found id: ""
	I1217 00:59:18.169130 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.169138 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:18.169144 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:18.169213 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:18.196282 1176706 cri.go:89] found id: ""
	I1217 00:59:18.196296 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.196303 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:18.196308 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:18.196374 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:18.221983 1176706 cri.go:89] found id: ""
	I1217 00:59:18.222001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.222008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:18.222014 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:18.222104 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:18.253664 1176706 cri.go:89] found id: ""
	I1217 00:59:18.253678 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.253695 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:18.253701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:18.253759 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:18.285902 1176706 cri.go:89] found id: ""
	I1217 00:59:18.285926 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.285935 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:18.285940 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:18.286012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:18.314726 1176706 cri.go:89] found id: ""
	I1217 00:59:18.314740 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.314747 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:18.314762 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:18.314817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:18.344853 1176706 cri.go:89] found id: ""
	I1217 00:59:18.344867 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.344875 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:18.344882 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:18.344904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:18.414538 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:18.414559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.447095 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:18.447111 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:18.512991 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:18.513011 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:18.533994 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:18.534020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:18.598850 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.100519 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:21.110642 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:21.110704 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:21.135662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.135677 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.135684 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:21.135690 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:21.135749 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:21.165495 1176706 cri.go:89] found id: ""
	I1217 00:59:21.165508 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.165515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:21.165522 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:21.165581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:21.190194 1176706 cri.go:89] found id: ""
	I1217 00:59:21.190216 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.190224 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:21.190229 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:21.190286 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:21.215635 1176706 cri.go:89] found id: ""
	I1217 00:59:21.215658 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.215668 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:21.215674 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:21.215741 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:21.240901 1176706 cri.go:89] found id: ""
	I1217 00:59:21.240915 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.240922 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:21.240928 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:21.240985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:21.282662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.282676 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.282683 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:21.282689 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:21.282747 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:21.318909 1176706 cri.go:89] found id: ""
	I1217 00:59:21.318937 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.318946 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:21.318955 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:21.318981 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:21.389438 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:21.389459 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:21.407933 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:21.407951 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:21.470948 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.470958 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:21.470970 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:21.543202 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:21.543223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:24.074213 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:24.084903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:24.084967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:24.111192 1176706 cri.go:89] found id: ""
	I1217 00:59:24.111207 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.111214 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:24.111221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:24.111280 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:24.137550 1176706 cri.go:89] found id: ""
	I1217 00:59:24.137564 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.137572 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:24.137577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:24.137638 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:24.163576 1176706 cri.go:89] found id: ""
	I1217 00:59:24.163590 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.163598 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:24.163603 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:24.163661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:24.191365 1176706 cri.go:89] found id: ""
	I1217 00:59:24.191379 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.191386 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:24.191391 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:24.191451 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:24.218021 1176706 cri.go:89] found id: ""
	I1217 00:59:24.218036 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.218043 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:24.218048 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:24.218109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:24.243066 1176706 cri.go:89] found id: ""
	I1217 00:59:24.243079 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.243086 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:24.243092 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:24.243150 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:24.273424 1176706 cri.go:89] found id: ""
	I1217 00:59:24.273438 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.273446 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:24.273453 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:24.273468 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:24.352524 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:24.352545 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:24.370425 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:24.370445 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:24.435871 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:24.435881 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:24.435896 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:24.504929 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:24.504949 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.033266 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:27.043460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:27.043521 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:27.068665 1176706 cri.go:89] found id: ""
	I1217 00:59:27.068679 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.068686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:27.068698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:27.068754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:27.094007 1176706 cri.go:89] found id: ""
	I1217 00:59:27.094021 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.094028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:27.094033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:27.094092 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:27.118910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.118923 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.118931 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:27.118936 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:27.118994 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:27.147303 1176706 cri.go:89] found id: ""
	I1217 00:59:27.147317 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.147324 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:27.147330 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:27.147386 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:27.172343 1176706 cri.go:89] found id: ""
	I1217 00:59:27.172357 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.172365 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:27.172370 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:27.172458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:27.197910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.197924 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.197932 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:27.197938 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:27.198001 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:27.227576 1176706 cri.go:89] found id: ""
	I1217 00:59:27.227591 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.227598 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:27.227606 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:27.227618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:27.311005 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:27.311016 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:27.311026 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:27.382732 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:27.382752 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.415820 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:27.415836 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:27.482903 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:27.482926 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.004621 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:30.030664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:30.030745 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:30.081469 1176706 cri.go:89] found id: ""
	I1217 00:59:30.081485 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.081493 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:30.081499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:30.081566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:30.113916 1176706 cri.go:89] found id: ""
	I1217 00:59:30.113931 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.113939 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:30.113946 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:30.114011 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:30.145424 1176706 cri.go:89] found id: ""
	I1217 00:59:30.145439 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.145447 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:30.145453 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:30.145519 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:30.172979 1176706 cri.go:89] found id: ""
	I1217 00:59:30.172993 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.173000 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:30.173006 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:30.173068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:30.203666 1176706 cri.go:89] found id: ""
	I1217 00:59:30.203680 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.203688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:30.203693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:30.203754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:30.230252 1176706 cri.go:89] found id: ""
	I1217 00:59:30.230266 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.230274 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:30.230280 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:30.230346 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:30.263265 1176706 cri.go:89] found id: ""
	I1217 00:59:30.263288 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.263297 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:30.263305 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:30.263317 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.285817 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:30.285833 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:30.357587 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:30.357597 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:30.357609 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:30.426496 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:30.426518 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:30.455371 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:30.455387 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:33.025588 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:33.037063 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:33.037133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:33.066495 1176706 cri.go:89] found id: ""
	I1217 00:59:33.066510 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.066518 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:33.066531 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:33.066593 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:33.094203 1176706 cri.go:89] found id: ""
	I1217 00:59:33.094218 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.094225 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:33.094230 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:33.094289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:33.121048 1176706 cri.go:89] found id: ""
	I1217 00:59:33.121062 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.121070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:33.121076 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:33.121137 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:33.148530 1176706 cri.go:89] found id: ""
	I1217 00:59:33.148559 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.148568 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:33.148574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:33.148647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:33.175802 1176706 cri.go:89] found id: ""
	I1217 00:59:33.175816 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.175823 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:33.175829 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:33.175892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:33.206535 1176706 cri.go:89] found id: ""
	I1217 00:59:33.206548 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.206556 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:33.206562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:33.206623 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:33.236039 1176706 cri.go:89] found id: ""
	I1217 00:59:33.236052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.236060 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:33.236068 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:33.236078 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:33.255180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:33.255197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:33.339098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:33.339108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:33.339121 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:33.412971 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:33.412997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:33.441676 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:33.441694 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.008647 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:36.020237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:36.020301 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:36.052600 1176706 cri.go:89] found id: ""
	I1217 00:59:36.052616 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.052623 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:36.052629 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:36.052692 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:36.081744 1176706 cri.go:89] found id: ""
	I1217 00:59:36.081759 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.081768 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:36.081773 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:36.081841 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:36.109987 1176706 cri.go:89] found id: ""
	I1217 00:59:36.110001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.110008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:36.110013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:36.110077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:36.134954 1176706 cri.go:89] found id: ""
	I1217 00:59:36.134967 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.134975 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:36.134980 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:36.135037 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:36.159862 1176706 cri.go:89] found id: ""
	I1217 00:59:36.159876 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.159884 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:36.159889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:36.159947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:36.187809 1176706 cri.go:89] found id: ""
	I1217 00:59:36.187822 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.187829 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:36.187835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:36.187904 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:36.214242 1176706 cri.go:89] found id: ""
	I1217 00:59:36.214257 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.214264 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:36.214272 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:36.214283 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.286225 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:36.286244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:36.305628 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:36.305646 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:36.371158 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:36.371170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:36.371181 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:36.439045 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:36.439065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:38.969106 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:38.979363 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:38.979424 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:39.007795 1176706 cri.go:89] found id: ""
	I1217 00:59:39.007810 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.007818 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:39.007824 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:39.007888 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:39.034152 1176706 cri.go:89] found id: ""
	I1217 00:59:39.034166 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.034173 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:39.034179 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:39.034238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:39.059914 1176706 cri.go:89] found id: ""
	I1217 00:59:39.059928 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.059935 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:39.059941 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:39.060002 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:39.085320 1176706 cri.go:89] found id: ""
	I1217 00:59:39.085334 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.085341 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:39.085349 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:39.085405 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:39.110285 1176706 cri.go:89] found id: ""
	I1217 00:59:39.110298 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.110306 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:39.110311 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:39.110372 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:39.135035 1176706 cri.go:89] found id: ""
	I1217 00:59:39.135058 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.135066 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:39.135072 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:39.135139 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:39.159817 1176706 cri.go:89] found id: ""
	I1217 00:59:39.159830 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.159848 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:39.159857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:39.159872 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:39.177791 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:39.177809 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:39.249533 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:39.249543 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:39.249552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:39.325557 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:39.325577 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:39.360066 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:39.360085 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:41.928404 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:41.938632 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:41.938696 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:41.964031 1176706 cri.go:89] found id: ""
	I1217 00:59:41.964052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.964059 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:41.964064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:41.964122 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:41.993062 1176706 cri.go:89] found id: ""
	I1217 00:59:41.993076 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.993084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:41.993089 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:41.993160 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:42.033652 1176706 cri.go:89] found id: ""
	I1217 00:59:42.033667 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.033676 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:42.033681 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:42.033746 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:42.060629 1176706 cri.go:89] found id: ""
	I1217 00:59:42.060645 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.060653 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:42.060659 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:42.060722 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:42.092817 1176706 cri.go:89] found id: ""
	I1217 00:59:42.092845 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.092853 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:42.092868 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:42.092941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:42.136486 1176706 cri.go:89] found id: ""
	I1217 00:59:42.136506 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.136515 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:42.136521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:42.136592 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:42.171937 1176706 cri.go:89] found id: ""
	I1217 00:59:42.171952 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.171959 1176706 logs.go:284] No container was found matching "kindnet"
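Each pass through the restart loop repeats the same per-component container lookup before gathering logs. For reference, the check can be reproduced by hand on the node; a minimal sketch, assuming crictl is installed and talking to the same CRI-O socket minikube configures:

    # Mirror the `sudo crictl ps -a --quiet --name=<component>` calls from the log,
    # one control-plane component at a time.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        if [ -z "$ids" ]; then
            echo "no containers found for $c"
        else
            echo "$c: $ids"
        fi
    done

An empty result for every component, as seen throughout this run, is consistent with the kubelet never creating the static control-plane pods.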
	I1217 00:59:42.171967 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:42.171979 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:42.262695 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:42.262707 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:42.262718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:42.339199 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:42.339220 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:42.372997 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:42.373025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:42.446036 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:42.446055 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
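The "Gathering logs for ..." steps are plain journalctl, dmesg, and crictl invocations and can be re-run directly when debugging a failure like this one. The same commands, trimmed to the last 400 lines as minikube does:

    # CRI-O and kubelet unit logs, kernel warnings/errors, and container status.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a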
	I1217 00:59:44.965013 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:44.976094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:44.976161 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:45.015164 1176706 cri.go:89] found id: ""
	I1217 00:59:45.015181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.015189 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:45.015195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:45.015272 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:45.071613 1176706 cri.go:89] found id: ""
	I1217 00:59:45.071635 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.071643 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:45.071649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:45.071715 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:45.119793 1176706 cri.go:89] found id: ""
	I1217 00:59:45.119818 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.119826 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:45.119839 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:45.119914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:45.151783 1176706 cri.go:89] found id: ""
	I1217 00:59:45.151800 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.151808 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:45.151814 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:45.151892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:45.215691 1176706 cri.go:89] found id: ""
	I1217 00:59:45.215708 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.215717 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:45.215723 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:45.215788 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:45.307588 1176706 cri.go:89] found id: ""
	I1217 00:59:45.307603 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.307612 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:45.307617 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:45.307686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:45.338241 1176706 cri.go:89] found id: ""
	I1217 00:59:45.338255 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.338262 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:45.338270 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:45.338281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:45.369988 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:45.370005 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:45.441693 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:45.441715 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:45.461548 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:45.461567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:45.548353 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
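Every `kubectl describe nodes` attempt above fails with the same connection-refused error, which normally means nothing is listening on the apiserver port at all rather than a TLS or auth problem. A quick check that is not part of the captured log but narrows this down, assuming curl and a standard ss are available on the node:

    # Is anything bound to the apiserver port the kubeconfig points at (8441)?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Direct probe of the endpoint; "connection refused" here matches the kubectl errors.
    curl -k --max-time 5 https://localhost:8441/healthz || true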
	I1217 00:59:45.548363 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:45.548374 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.120029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:48.130460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:48.130527 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:48.158049 1176706 cri.go:89] found id: ""
	I1217 00:59:48.158063 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.158070 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:48.158075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:48.158133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:48.183768 1176706 cri.go:89] found id: ""
	I1217 00:59:48.183782 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.183790 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:48.183795 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:48.183853 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:48.209858 1176706 cri.go:89] found id: ""
	I1217 00:59:48.209883 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.209891 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:48.209897 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:48.209969 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:48.239433 1176706 cri.go:89] found id: ""
	I1217 00:59:48.239447 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.239464 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:48.239470 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:48.239546 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:48.283289 1176706 cri.go:89] found id: ""
	I1217 00:59:48.283312 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.283320 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:48.283325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:48.283401 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:48.314402 1176706 cri.go:89] found id: ""
	I1217 00:59:48.314429 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.314437 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:48.314443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:48.314511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:48.343692 1176706 cri.go:89] found id: ""
	I1217 00:59:48.343706 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.343727 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:48.343735 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:48.343745 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:48.362542 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:48.362560 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:48.427994 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:48.428004 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:48.428016 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.499539 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:48.499559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:48.531009 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:48.531025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
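Between log sweeps, minikube keeps polling for an apiserver process with `pgrep -xnf kube-apiserver.*minikube.*` until the restart deadline expires. The equivalent wait, reproduced by hand (the 60-second budget below is illustrative, not the timeout minikube uses internally):

    # Poll for a running kube-apiserver process, checking every 3 seconds for ~60 seconds.
    for i in $(seq 1 20); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "kube-apiserver is running"
            break
        fi
        sleep 3
    done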
	I1217 00:59:51.098220 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:51.109265 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:51.109331 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:51.136197 1176706 cri.go:89] found id: ""
	I1217 00:59:51.136213 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.136221 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:51.136227 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:51.136287 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:51.163078 1176706 cri.go:89] found id: ""
	I1217 00:59:51.163092 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.163100 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:51.163105 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:51.163172 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:51.191839 1176706 cri.go:89] found id: ""
	I1217 00:59:51.191853 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.191861 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:51.191866 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:51.191949 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:51.218098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.218116 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.218124 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:51.218130 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:51.218211 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:51.243098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.243112 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.243120 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:51.243125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:51.243191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:51.271566 1176706 cri.go:89] found id: ""
	I1217 00:59:51.271579 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.271586 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:51.271591 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:51.271647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:51.305158 1176706 cri.go:89] found id: ""
	I1217 00:59:51.305181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.305187 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:51.305196 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:51.305207 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.376352 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:51.376373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:51.394410 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:51.394427 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:51.459231 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:51.459240 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:51.459251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:51.528231 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:51.528252 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:54.058312 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:54.069031 1176706 kubeadm.go:602] duration metric: took 4m2.785263609s to restartPrimaryControlPlane
	W1217 00:59:54.069095 1176706 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:59:54.069181 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:59:54.486154 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:59:54.499356 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:59:54.507725 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:59:54.507779 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:59:54.515997 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:59:54.516007 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 00:59:54.516064 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:59:54.524157 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:59:54.524213 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:59:54.532265 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:59:54.540638 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:59:54.540707 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:59:54.548269 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.556326 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:59:54.556388 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.564545 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:59:54.572682 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:59:54.572738 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
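Before falling back to a fresh `kubeadm init`, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; that is what the grep/rm pairs above show (all four files are already absent here, so every grep exits with status 2). A compact re-creation of that cleanup, with the endpoint taken from the log:

    # Remove any leftover kubeconfig that does not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done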
	I1217 00:59:54.580611 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:59:54.700281 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:59:54.700747 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:59:54.763643 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:03:56.152758 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:03:56.152795 1176706 kubeadm.go:319] 
	I1217 01:03:56.152869 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:03:56.156728 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.156797 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.156958 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.157014 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.157073 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.157118 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.157197 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.157253 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.157300 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.157352 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.157400 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.157453 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.157508 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.157553 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.157624 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.157727 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.157824 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.157884 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.160971 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.161055 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.161118 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.161193 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.161252 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.161327 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.161379 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.161441 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.161501 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.161574 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.161645 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.161681 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.161741 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:56.161790 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:56.161845 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:56.161896 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:56.161957 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:56.162010 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:56.162092 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:56.162157 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:56.165021 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:56.165147 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:56.165231 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:56.165300 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:56.165418 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:56.165512 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:56.165614 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:56.165696 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:56.165733 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:56.165861 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:56.165963 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:03:56.166026 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240228s
	I1217 01:03:56.166028 1176706 kubeadm.go:319] 
	I1217 01:03:56.166083 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:03:56.166114 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:03:56.166215 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:03:56.166218 1176706 kubeadm.go:319] 
	I1217 01:03:56.166320 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:03:56.166351 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:03:56.166380 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 01:03:56.166487 1176706 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240228s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
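The init attempt gives up once the 4m0s kubelet health deadline passes, and the troubleshooting hints kubeadm prints can be run as-is on the node; the health endpoint it polls is also easy to hit by hand:

    # The checks kubeadm itself suggests when the kubelet never becomes healthy.
    systemctl status kubelet
    journalctl -xeu kubelet
    # The same probe kubeadm waits on for up to 4 minutes.
    curl -sSL http://127.0.0.1:10248/healthz

Note that the cgroups v1 and kernel-config messages above are surfaced only as preflight warnings; the failure itself is the kubelet never answering on 10248.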
	
	I1217 01:03:56.166580 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 01:03:56.166903 1176706 kubeadm.go:319] 
	I1217 01:03:56.586040 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:03:56.599481 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:03:56.599536 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:03:56.607687 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:03:56.607697 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 01:03:56.607750 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:03:56.615588 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:03:56.615644 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:03:56.623820 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:03:56.631817 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:03:56.631875 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:03:56.639771 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.647723 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:03:56.647784 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.655274 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:03:56.662953 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:03:56.663009 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:03:56.671031 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:03:56.709331 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.709382 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.784528 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.784593 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.784627 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.784671 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.784718 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.784764 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.784811 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.784857 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.784907 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.784950 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.784997 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.785046 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.852730 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.852846 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.852941 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.864882 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.870169 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.870260 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.870331 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.870414 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.870480 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.870560 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.870623 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.870698 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.870772 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.870857 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.870939 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.870985 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.871053 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:57.081118 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:57.308024 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:57.795688 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:58.747783 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:59.056308 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:59.056908 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:59.061460 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:59.064667 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:59.064766 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:59.064843 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:59.064909 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:59.079437 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:59.079539 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:59.087425 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:59.087990 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:59.088228 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:59.232706 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:59.232823 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:07:59.232882 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288911s
	I1217 01:07:59.232905 1176706 kubeadm.go:319] 
	I1217 01:07:59.232961 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:07:59.232994 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:07:59.233119 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:07:59.233124 1176706 kubeadm.go:319] 
	I1217 01:07:59.233227 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:07:59.233261 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:07:59.233291 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:07:59.233294 1176706 kubeadm.go:319] 
	I1217 01:07:59.237945 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:07:59.238359 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:07:59.238466 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:07:59.238699 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:07:59.238704 1176706 kubeadm.go:319] 
	I1217 01:07:59.238771 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:07:59.238833 1176706 kubeadm.go:403] duration metric: took 12m7.995613678s to StartCluster
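With the second init attempt timing out the same way, StartCluster fails after just over 12 minutes and the run ends with one final log sweep. When reproducing this outside CI, the same material can be collected from the host without shelling into the node manually; a sketch, assuming a recent minikube CLI and the default profile (add -p <profile> for a named one):

    # Bundle everything minikube gathers above into a single file.
    minikube logs --file=minikube-logs.txt
    # Or read the kubelet journal on the node over minikube's built-in SSH.
    minikube ssh -- sudo journalctl -xeu kubelet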
	I1217 01:07:59.238862 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:07:59.238924 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:07:59.265092 1176706 cri.go:89] found id: ""
	I1217 01:07:59.265110 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.265118 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:07:59.265124 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:07:59.265190 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:07:59.289869 1176706 cri.go:89] found id: ""
	I1217 01:07:59.289884 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.289891 1176706 logs.go:284] No container was found matching "etcd"
	I1217 01:07:59.289896 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:07:59.289954 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:07:59.315177 1176706 cri.go:89] found id: ""
	I1217 01:07:59.315192 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.315200 1176706 logs.go:284] No container was found matching "coredns"
	I1217 01:07:59.315206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:07:59.315267 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:07:59.343402 1176706 cri.go:89] found id: ""
	I1217 01:07:59.343422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.343429 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:07:59.343435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:07:59.343492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:07:59.369351 1176706 cri.go:89] found id: ""
	I1217 01:07:59.369367 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.369375 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:07:59.369381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:07:59.369446 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:07:59.395407 1176706 cri.go:89] found id: ""
	I1217 01:07:59.395422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.395430 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:07:59.395436 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:07:59.395497 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:07:59.425527 1176706 cri.go:89] found id: ""
	I1217 01:07:59.425542 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.425549 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 01:07:59.425557 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:07:59.425567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:07:59.496396 1176706 logs.go:123] Gathering logs for container status ...
	I1217 01:07:59.496422 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:07:59.529365 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 01:07:59.529381 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:07:59.607059 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 01:07:59.607079 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:07:59.625460 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:07:59.625476 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:07:59.694111 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 01:07:59.694128 1176706 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:07:59.694160 1176706 out.go:285] * 
	W1217 01:07:59.696578 1176706 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.696718 1176706 out.go:285] * 
	W1217 01:07:59.699147 1176706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:07:59.705064 1176706 out.go:203] 
	W1217 01:07:59.708024 1176706 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.708074 1176706 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:07:59.708093 1176706 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:07:59.711386 1176706 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.077706792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.077955098Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078065315Z" level=info msg="Create NRI interface"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078221668Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.07823903Z" level=info msg="runtime interface created"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078253274Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078274861Z" level=info msg="runtime interface starting up..."
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078281187Z" level=info msg="starting plugins..."
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078309913Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 00:55:50 functional-389537 crio[10035]: time="2025-12-17T00:55:50.078394662Z" level=info msg="No systemd watchdog enabled"
	Dec 17 00:55:50 functional-389537 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.769515298Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=7738c2ec-23fd-41c2-bf87-2793f023edcc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.770662432Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=dfe8a792-5dcc-4fb8-9e7c-61d12e13480c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.771173887Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5e41de14-6ab5-4bd0-8b1f-d1aaa926d052 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.771613762Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ef9a5b6b-ddfd-4451-b4b5-65c9f96efdc9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.77201523Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f7106076-8cd9-43cb-b7d6-b0df492103a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.772558315Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a5725d73-e041-4ebd-99d9-bf135606222b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 00:59:54 functional-389537 crio[10035]: time="2025-12-17T00:59:54.772975922Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1c1829d4-725f-476c-b5d6-fe07b75b9254 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:08:03.565830   21497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:03.566401   21497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:03.568011   21497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:03.568688   21497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:03.570354   21497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:08:03 up  6:50,  0 user,  load average: 0.53, 0.28, 0.47
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:08:01 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:01 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2123.
	Dec 17 01:08:01 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:02 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:02 functional-389537 kubelet[21377]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:02 functional-389537 kubelet[21377]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:02 functional-389537 kubelet[21377]: E1217 01:08:02.083264   21377 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:02 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:02 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:02 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2124.
	Dec 17 01:08:02 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:02 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:02 functional-389537 kubelet[21413]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:02 functional-389537 kubelet[21413]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:02 functional-389537 kubelet[21413]: E1217 01:08:02.820311   21413 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:02 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:02 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:03 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2125.
	Dec 17 01:08:03 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:03 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:03 functional-389537 kubelet[21496]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:03 functional-389537 kubelet[21496]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:03 functional-389537 kubelet[21496]: E1217 01:08:03.557747   21496 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:03 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:03 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (403.332511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.29s)
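The kubelet journal above shows why the control plane never came up: the v1.35.0-beta.0 kubelet refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the kubelet unit is also not enabled. The preflight warning says the real opt-out is the kubelet configuration option 'FailCgroupV1' (or migrating the host to cgroups v2); the log's own suggestion is the cgroup-driver extra-config. A minimal sketch of that remediation, reusing the profile name and flags recorded in this run — whether these flags alone are enough on a cgroup v1 node with this beta kubelet is an assumption:

    # Enable the kubelet unit inside the node, as the [WARNING Service-kubelet] line asks
    out/minikube-linux-arm64 -p functional-389537 ssh -- sudo systemctl enable kubelet.service

    # Retry the start with the cgroup driver suggested in the log; the other flags
    # mirror the invocation recorded in the Audit table at the end of this report
    out/minikube-linux-arm64 start -p functional-389537 --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # If the kubelet still exits with the cgroup v1 validation error, inspect it directly
    out/minikube-linux-arm64 -p functional-389537 ssh -- sudo journalctl -xeu kubelet
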

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-389537 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-389537 apply -f testdata/invalidsvc.yaml: exit status 1 (73.966136ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-389537 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)
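The manifest is not the problem here: kubectl cannot reach the apiserver on 192.168.49.2:8441 because the control plane never started (see the ComponentHealth failure above). A quick sketch for confirming that before suspecting invalidsvc.yaml, reusing commands that appear elsewhere in this report:

    # Ask the apiserver for its readiness endpoint; "connection refused" confirms it is down
    kubectl --context functional-389537 get --raw /readyz

    # Cross-check with minikube's own view of the control plane
    out/minikube-linux-arm64 status -p functional-389537 --format={{.APIServer}}
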

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389537 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389537 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389537 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389537 --alsologtostderr -v=1] stderr:
I1217 01:10:18.261113 1195652 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:18.261248 1195652 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:18.261268 1195652 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:18.261285 1195652 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:18.261681 1195652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:18.262007 1195652 mustload.go:66] Loading cluster: functional-389537
I1217 01:10:18.262864 1195652 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:18.263355 1195652 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:18.280009 1195652 host.go:66] Checking if "functional-389537" exists ...
I1217 01:10:18.280336 1195652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:10:18.336835 1195652 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:18.327346043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:10:18.336953 1195652 api_server.go:166] Checking apiserver status ...
I1217 01:10:18.337020 1195652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 01:10:18.337066 1195652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:18.353945 1195652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
W1217 01:10:18.449830 1195652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 01:10:18.453049 1195652 out.go:179] * The control-plane node functional-389537 apiserver is not running: (state=Stopped)
I1217 01:10:18.456058 1195652 out.go:179]   To start a cluster, run: "minikube start -p functional-389537"
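
The dashboard command produced no URL because its apiserver check failed: the pgrep run by api_server.go above exited with status 1, i.e. no kube-apiserver process exists in the node. A small sketch of reproducing that check by hand; the command is the one from the log line above, with quoting added here only so the shell does not expand the pattern:

    # Look for an apiserver process inside the functional-389537 node
    out/minikube-linux-arm64 -p functional-389537 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo "exit status: $?"   # 1 here matches the "stopped: unable to get apiserver pid" line above
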
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (311.211597ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount     │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh       │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh       │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh       │ functional-389537 ssh -- ls -la /mount-9p                                                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh       │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount     │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount2 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh       │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount     │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount1 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount     │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount3 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh       │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh       │ functional-389537 ssh findmnt -T /mount2                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh       │ functional-389537 ssh findmnt -T /mount3                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount     │ -p functional-389537 --kill=true                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ addons    │ functional-389537 addons list                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ addons    │ functional-389537 addons list -o json                                                                                                               │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ service   │ functional-389537 service list                                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service   │ functional-389537 service list -o json                                                                                                              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service   │ functional-389537 service --namespace=default --https --url hello-node                                                                              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service   │ functional-389537 service hello-node --url --format={{.IP}}                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service   │ functional-389537 service hello-node --url                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start     │ -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start     │ -p functional-389537 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start     │ -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-389537 --alsologtostderr -v=1                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:10:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:10:18.049509 1195605 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:10:18.049733 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.049764 1195605 out.go:374] Setting ErrFile to fd 2...
	I1217 01:10:18.049784 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.050297 1195605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:10:18.050762 1195605 out.go:368] Setting JSON to false
	I1217 01:10:18.051720 1195605 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24768,"bootTime":1765909050,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:10:18.051822 1195605 start.go:143] virtualization:  
	I1217 01:10:18.056956 1195605 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:10:18.059947 1195605 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:10:18.060044 1195605 notify.go:221] Checking for updates...
	I1217 01:10:18.065731 1195605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:10:18.068771 1195605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:10:18.071703 1195605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:10:18.074678 1195605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:10:18.077595 1195605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:10:18.081023 1195605 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:10:18.081621 1195605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:10:18.120581 1195605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:10:18.120797 1195605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:18.188706 1195605 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:18.178636988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:18.188812 1195605 docker.go:319] overlay module found
	I1217 01:10:18.191969 1195605 out.go:179] * Using the docker driver based on existing profile
	I1217 01:10:18.194813 1195605 start.go:309] selected driver: docker
	I1217 01:10:18.194851 1195605 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:18.194963 1195605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:10:18.198653 1195605 out.go:203] 
	W1217 01:10:18.201645 1195605 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 01:10:18.204565 1195605 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682372585Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68256814Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68261275Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682675452Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711610422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711759996Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711798871Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739279084Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739445118Z" level=info msg="Image localhost/kicbase/echo-server:functional-389537 not found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739495176Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-389537 found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732782966Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732960388Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733031024Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733098123Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765602567Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765759674Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765805293Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.806741123Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=54276271-8e2f-42ec-a439-ea95344609a5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:10:19.506500   24184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:19.507378   24184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:19.509080   24184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:19.509396   24184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:19.510885   24184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:10:19 up  6:52,  0 user,  load average: 0.74, 0.43, 0.50
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2304.
	Dec 17 01:10:17 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:17 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:17 functional-389537 kubelet[24049]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:17 functional-389537 kubelet[24049]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:17 functional-389537 kubelet[24049]: E1217 01:10:17.839330   24049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:18 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2305.
	Dec 17 01:10:18 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:18 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:18 functional-389537 kubelet[24078]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:18 functional-389537 kubelet[24078]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:18 functional-389537 kubelet[24078]: E1217 01:10:18.548009   24078 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:18 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:18 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:19 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2306.
	Dec 17 01:10:19 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:19 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:19 functional-389537 kubelet[24137]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:19 functional-389537 kubelet[24137]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:19 functional-389537 kubelet[24137]: E1217 01:10:19.318958   24137 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:19 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:19 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (307.492769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.70s)
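Reviewer note: every kubelet restart in the logs above fails the same configuration check ("kubelet is configured to not run on a host using cgroup v1"), so the API server on port 8441 never comes back up and the dashboard, status and service commands in this group all observe apiserver: Stopped. The underlying question is simply which cgroup hierarchy the host kernel exposes. The following standalone Go sketch is not part of the test suite; it assumes the kernel constant CGROUP2_SUPER_MAGIC and shows one way to confirm whether a node is on the unified cgroup v2 hierarchy or still on v1:

package main

import (
	"fmt"
	"syscall"
)

// cgroup2SuperMagic is CGROUP2_SUPER_MAGIC from the Linux kernel headers.
const cgroup2SuperMagic = 0x63677270

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup failed:", err)
		return
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("unified cgroup v2 hierarchy")
	} else {
		// On a v1 (or hybrid) host, /sys/fs/cgroup is typically a tmpfs
		// with per-controller v1 mounts underneath.
		fmt.Println("legacy cgroup v1 / hybrid hierarchy")
	}
}

The runner is Ubuntu 20.04 on kernel 5.15.0-1084-aws, which defaults to cgroup v1, consistent with the validation error above; whether the fix belongs in the runner image or in the kubelet configuration is outside the scope of this report.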

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 status: exit status 2 (300.478465ms)

                                                
                                                
-- stdout --
	functional-389537
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-389537 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (342.303636ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-389537 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 status -o json: exit status 2 (298.748279ms)

                                                
                                                
-- stdout --
	{"Name":"functional-389537","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-389537 status -o json" : exit status 2
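Reviewer note: the three status invocations above differ only in output formatting; all of them render the same per-node status fields that the JSON form makes explicit (Name, Host, Kubelet, APIServer, Kubeconfig, Worker) through a Go text template. A minimal, self-contained sketch of that rendering step, assuming a local struct rather than minikube's real status type and reusing the template string the test passes (including its literal "kublet" label):

package main

import (
	"os"
	"text/template"
)

// nodeStatus mirrors the fields visible in the JSON output above; the real
// minikube status type may carry additional fields.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Template string as passed to `status -f` by the test.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, nodeStatus{
		Name:       "functional-389537",
		Host:       "Running",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Configured",
	})
}

With the values reported by the plain status run, this prints host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured; the failures above stem from the non-zero exit status 2 (apiserver down), not from the formatting itself.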
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (306.70089ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-389537 ssh cat /mount-9p/test-1765933699495920901                                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh -- ls -la /mount-9p                                                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount2 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount1 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount3 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh findmnt -T /mount2                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh findmnt -T /mount3                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount   │ -p functional-389537 --kill=true                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ addons  │ functional-389537 addons list                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ addons  │ functional-389537 addons list -o json                                                                                                               │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ service │ functional-389537 service list                                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service │ functional-389537 service list -o json                                                                                                              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service │ functional-389537 service --namespace=default --https --url hello-node                                                                              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service │ functional-389537 service hello-node --url --format={{.IP}}                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service │ functional-389537 service hello-node --url                                                                                                          │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start   │ -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start   │ -p functional-389537 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:10:15
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:10:15.433726 1195016 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:10:15.433915 1195016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:15.433947 1195016 out.go:374] Setting ErrFile to fd 2...
	I1217 01:10:15.433968 1195016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:15.434281 1195016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:10:15.434702 1195016 out.go:368] Setting JSON to false
	I1217 01:10:15.435613 1195016 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24766,"bootTime":1765909050,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:10:15.435722 1195016 start.go:143] virtualization:  
	I1217 01:10:15.439000 1195016 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:10:15.442900 1195016 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:10:15.442993 1195016 notify.go:221] Checking for updates...
	I1217 01:10:15.448834 1195016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:10:15.451753 1195016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:10:15.454602 1195016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:10:15.457390 1195016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:10:15.460338 1195016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:10:15.464130 1195016 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:10:15.464834 1195016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:10:15.497697 1195016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:10:15.497821 1195016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:15.586552 1195016 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:15.576663635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:15.586662 1195016 docker.go:319] overlay module found
	I1217 01:10:15.589907 1195016 out.go:179] * Using the docker driver based on existing profile
	I1217 01:10:15.592835 1195016 start.go:309] selected driver: docker
	I1217 01:10:15.592872 1195016 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:15.592962 1195016 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:10:15.593062 1195016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:15.648390 1195016 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:15.638942065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:15.648851 1195016 cni.go:84] Creating CNI manager for ""
	I1217 01:10:15.648912 1195016 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 01:10:15.648949 1195016 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:15.653804 1195016 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682372585Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68256814Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68261275Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682675452Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711610422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711759996Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711798871Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739279084Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739445118Z" level=info msg="Image localhost/kicbase/echo-server:functional-389537 not found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739495176Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-389537 found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732782966Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732960388Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733031024Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733098123Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765602567Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765759674Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765805293Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.806741123Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=54276271-8e2f-42ec-a439-ea95344609a5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:10:17.557479   24035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:17.558715   24035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:17.559239   24035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:17.560973   24035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:17.561599   24035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:10:17 up  6:52,  0 user,  load average: 0.74, 0.43, 0.50
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:10:14 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:15 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2301.
	Dec 17 01:10:15 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:15 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:15 functional-389537 kubelet[23879]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:15 functional-389537 kubelet[23879]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:15 functional-389537 kubelet[23879]: E1217 01:10:15.563951   23879 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:15 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:15 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:16 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2302.
	Dec 17 01:10:16 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:16 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:16 functional-389537 kubelet[23913]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:16 functional-389537 kubelet[23913]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:16 functional-389537 kubelet[23913]: E1217 01:10:16.266272   23913 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:16 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:16 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:16 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2303.
	Dec 17 01:10:16 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:17 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:17 functional-389537 kubelet[23951]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:17 functional-389537 kubelet[23951]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:17 functional-389537 kubelet[23951]: E1217 01:10:17.080039   23951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:17 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (341.322017ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.33s)
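The kubelet units in the logs above exit immediately because the kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), which in turn leaves nothing listening on the apiserver port 8441. A minimal stand-alone sketch (not part of the minikube test suite) for confirming which cgroup hierarchy a Linux machine is running, assuming the golang.org/x/sys/unix package is available:

// cgroupcheck.go - minimal sketch (not part of minikube): report whether
// /sys/fs/cgroup is a cgroup v2 (unified) mount or a legacy v1 hierarchy.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		log.Fatalf("statfs /sys/fs/cgroup: %v", err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// On cgroup v1 hosts /sys/fs/cgroup is typically a tmpfs with
		// per-controller cgroupfs mounts underneath it.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}

The node is a Docker container sharing the host kernel, so a v1 result on the Ubuntu 20.04 / 5.15 host would match the repeated kubelet validation failure recorded above.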

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-389537 create deployment hello-node-connect --image kicbase/echo-server
E1217 01:10:07.912268 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-389537 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.829591ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-389537 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
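Every kubectl call in this test fails with "connect: connection refused" against 192.168.49.2:8441, meaning nothing is accepting connections on the apiserver port (consistent with the kubelet crash loop shown earlier), rather than a TLS or authorization problem. A minimal sketch of that reachability check, using only the Go standard library and the endpoint taken from the log above:

// apiprobe.go - minimal sketch: distinguish "connection refused" from a
// reachable (even if unhealthy) kube-apiserver on the profile's endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed cert; skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}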
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-389537 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-389537 describe po hello-node-connect: exit status 1 (57.758354ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-389537 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-389537 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-389537 logs -l app=hello-node-connect: exit status 1 (84.543416ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-389537 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-389537 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-389537 describe svc hello-node-connect: exit status 1 (67.888393ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-389537 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
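The "(dbg) Run:" entries above are ordinary kubectl invocations against the profile's kubeconfig context. A minimal sketch of replaying one of them outside the test harness, assuming kubectl is on PATH and a functional-389537 context exists in the active kubeconfig:

// runkubectl.go - minimal sketch of replaying one of the "(dbg) Run:" commands
// above outside the test harness.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-389537",
		"describe", "svc", "hello-node-connect")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// With the apiserver down this reproduces the exit status 1 and the
		// "connection to the server ... was refused" message recorded above.
		fmt.Println("kubectl failed:", err)
	}
}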
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
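The inspect output above shows the guest apiserver port 8441/tcp published on 127.0.0.1:33911, which is how the minikube status commands below reach the (stopped) apiserver from the host. A minimal sketch of pulling that mapping out of docker inspect JSON with only the standard library, assuming the docker CLI is on PATH and using the container name from the log:

// portmap.go - minimal sketch: read the host port published for 8441/tcp from
// `docker inspect functional-389537`, mirroring the NetworkSettings.Ports
// structure shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-389537").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatalf("decode inspect output: %v", err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}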
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (299.836797ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-389537 ssh -n functional-389537 sudo cat /tmp/does/not/exist/cp-test.txt                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh echo hello                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh cat /etc/hostname                                                                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001:/mount-9p --alsologtostderr -v=1              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh -- ls -la /mount-9p                                                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh cat /mount-9p/test-1765933699495920901                                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh -- ls -la /mount-9p                                                                                                           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh sudo umount -f /mount-9p                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount2 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount1 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ mount   │ -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount3 --alsologtostderr -v=1                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh findmnt -T /mount1                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh findmnt -T /mount2                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh     │ functional-389537 ssh findmnt -T /mount3                                                                                                            │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount   │ -p functional-389537 --kill=true                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ addons  │ functional-389537 addons list                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ addons  │ functional-389537 addons list -o json                                                                                                               │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:55:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:55:46.994785 1176706 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:55:46.994905 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.994909 1176706 out.go:374] Setting ErrFile to fd 2...
	I1217 00:55:46.994912 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.995145 1176706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:55:46.995485 1176706 out.go:368] Setting JSON to false
	I1217 00:55:46.996300 1176706 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23897,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:55:46.996353 1176706 start.go:143] virtualization:  
	I1217 00:55:46.999868 1176706 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:55:47.003126 1176706 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:55:47.003469 1176706 notify.go:221] Checking for updates...
	I1217 00:55:47.009985 1176706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:55:47.012797 1176706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:55:47.015597 1176706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:55:47.018366 1176706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:55:47.021294 1176706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:55:47.024608 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:47.024710 1176706 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:55:47.058976 1176706 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:55:47.059096 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.117622 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.107831529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.117708 1176706 docker.go:319] overlay module found
	I1217 00:55:47.120741 1176706 out.go:179] * Using the docker driver based on existing profile
	I1217 00:55:47.123563 1176706 start.go:309] selected driver: docker
	I1217 00:55:47.123570 1176706 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.123673 1176706 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:55:47.123773 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.174997 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.166206706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.175382 1176706 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:55:47.175411 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:47.175464 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:47.175503 1176706 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.182544 1176706 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:55:47.185443 1176706 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:55:47.188263 1176706 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:55:47.191087 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:47.191140 1176706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:55:47.191147 1176706 cache.go:65] Caching tarball of preloaded images
	I1217 00:55:47.191162 1176706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:55:47.191229 1176706 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:55:47.191238 1176706 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:55:47.191343 1176706 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:55:47.210444 1176706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:55:47.210456 1176706 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:55:47.210476 1176706 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:55:47.210509 1176706 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:55:47.210571 1176706 start.go:364] duration metric: took 45.496µs to acquireMachinesLock for "functional-389537"
	I1217 00:55:47.210589 1176706 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:55:47.210598 1176706 fix.go:54] fixHost starting: 
	I1217 00:55:47.210865 1176706 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:55:47.227344 1176706 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:55:47.227372 1176706 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:55:47.230529 1176706 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:55:47.230551 1176706 machine.go:94] provisionDockerMachine start ...
	I1217 00:55:47.230646 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.247199 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.247509 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.247515 1176706 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:55:47.376058 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.376078 1176706 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:55:47.376140 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.394017 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.394338 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.394346 1176706 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:55:47.541042 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.541113 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.567770 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.568067 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.568081 1176706 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:55:47.696783 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:55:47.696798 1176706 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:55:47.696826 1176706 ubuntu.go:190] setting up certificates
	I1217 00:55:47.696844 1176706 provision.go:84] configureAuth start
	I1217 00:55:47.696911 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:47.715433 1176706 provision.go:143] copyHostCerts
	I1217 00:55:47.715503 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:55:47.715510 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:55:47.715589 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:55:47.715698 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:55:47.715703 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:55:47.715729 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:55:47.715793 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:55:47.715796 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:55:47.715819 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:55:47.715916 1176706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
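The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.49.2, functional-389537, localhost, minikube). A quick manual way to confirm them, using the same path the log writes to (a sketch, not part of the test run's own output):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'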
	I1217 00:55:47.936144 1176706 provision.go:177] copyRemoteCerts
	I1217 00:55:47.936198 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:55:47.936245 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.956022 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.053167 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:55:48.072266 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:55:48.091659 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:55:48.111240 1176706 provision.go:87] duration metric: took 414.372164ms to configureAuth
	I1217 00:55:48.111259 1176706 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:55:48.111463 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:48.111573 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.130165 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:48.130471 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:48.130482 1176706 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:55:48.471522 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:55:48.471533 1176706 machine.go:97] duration metric: took 1.240975938s to provisionDockerMachine
	I1217 00:55:48.471544 1176706 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:55:48.471555 1176706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:55:48.471613 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:55:48.471661 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.490121 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.584735 1176706 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:55:48.588097 1176706 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:55:48.588115 1176706 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:55:48.588125 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:55:48.588181 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:55:48.588263 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:55:48.588334 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:55:48.588376 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:55:48.596032 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:48.613682 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:55:48.631217 1176706 start.go:296] duration metric: took 159.660022ms for postStartSetup
	I1217 00:55:48.631287 1176706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:55:48.631323 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.648559 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.741603 1176706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:55:48.746366 1176706 fix.go:56] duration metric: took 1.535755013s for fixHost
	I1217 00:55:48.746384 1176706 start.go:83] releasing machines lock for "functional-389537", held for 1.535804694s
	I1217 00:55:48.746455 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:48.763224 1176706 ssh_runner.go:195] Run: cat /version.json
	I1217 00:55:48.763430 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.763750 1176706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:55:48.763808 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.786426 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.786940 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.880624 1176706 ssh_runner.go:195] Run: systemctl --version
	I1217 00:55:48.974663 1176706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:55:49.027409 1176706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:55:49.032432 1176706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:55:49.032491 1176706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:55:49.041183 1176706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:55:49.041196 1176706 start.go:496] detecting cgroup driver to use...
	I1217 00:55:49.041228 1176706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:55:49.041278 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:55:49.058264 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:55:49.077295 1176706 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:55:49.077360 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:55:49.093971 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:55:49.107900 1176706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:55:49.227935 1176706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:55:49.348723 1176706 docker.go:234] disabling docker service ...
	I1217 00:55:49.348791 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:55:49.364370 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:55:49.377769 1176706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:55:49.508111 1176706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:55:49.633558 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:55:49.646587 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:55:49.660861 1176706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:55:49.660916 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.670006 1176706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:55:49.670064 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.678812 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.687975 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.697006 1176706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:55:49.705500 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.714719 1176706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.723320 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.732206 1176706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:55:49.740020 1176706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:55:49.747555 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:49.895105 1176706 ssh_runner.go:195] Run: sudo systemctl restart crio
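The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A sketch for spot-checking the result on the node; the expected values are inferred from the commands above, not captured in this run:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",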
	I1217 00:55:50.085156 1176706 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:55:50.085220 1176706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:55:50.089378 1176706 start.go:564] Will wait 60s for crictl version
	I1217 00:55:50.089440 1176706 ssh_runner.go:195] Run: which crictl
	I1217 00:55:50.093400 1176706 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:55:50.123005 1176706 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 00:55:50.123090 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.155928 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.190668 1176706 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:55:50.193712 1176706 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:55:50.210245 1176706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:55:50.217339 1176706 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:55:50.220306 1176706 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:55:50.220479 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:50.220549 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.261117 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.261129 1176706 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:55:50.261188 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.288200 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.288211 1176706 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:55:50.288217 1176706 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:55:50.288323 1176706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
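In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base unit before the minikube-specific command line is set. Once the drop-in has been copied to the node (see the scp to 10-kubeadm.conf below), the merged unit can be inspected with, for example:

    systemctl cat kubelet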
	I1217 00:55:50.288468 1176706 ssh_runner.go:195] Run: crio config
	I1217 00:55:50.348160 1176706 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:55:50.348190 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:50.348199 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:50.348212 1176706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:55:50.348234 1176706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:55:50.348361 1176706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
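The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new below and later fed to the kubeadm init phases. If it ever needs checking by hand, recent kubeadm versions ship their own validator; a sketch assuming the same binary path the log uses:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new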
	I1217 00:55:50.348453 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:55:50.356478 1176706 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:55:50.356555 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:55:50.364296 1176706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:55:50.378459 1176706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:55:50.391769 1176706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1217 00:55:50.404843 1176706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:55:50.408803 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:50.530281 1176706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:55:50.553453 1176706 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:55:50.553463 1176706 certs.go:195] generating shared ca certs ...
	I1217 00:55:50.553477 1176706 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:55:50.553609 1176706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:55:50.553660 1176706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:55:50.553666 1176706 certs.go:257] generating profile certs ...
	I1217 00:55:50.553779 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:55:50.553831 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:55:50.553877 1176706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:55:50.553979 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:55:50.554006 1176706 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:55:50.554013 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:55:50.554039 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:55:50.554060 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:55:50.554085 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:55:50.554129 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:50.555361 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:55:50.582492 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:55:50.603683 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:55:50.621384 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:55:50.639056 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:55:50.656396 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:55:50.673796 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:55:50.690805 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:55:50.708128 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:55:50.726044 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:55:50.743273 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:55:50.763262 1176706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:55:50.777113 1176706 ssh_runner.go:195] Run: openssl version
	I1217 00:55:50.783340 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.791319 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:55:50.799039 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802914 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802970 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.844145 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:55:50.851746 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.859382 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:55:50.866837 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870628 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870686 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.912088 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:55:50.919506 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.926804 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:55:50.934239 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938447 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938514 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.979317 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
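The openssl -hash / test -L pairs above follow the usual CA-trust layout: each PEM under /usr/share/ca-certificates is linked from /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA.pem in this run). A manual sketch for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"    # expected to point back at minikubeCA.pem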
	I1217 00:55:50.986668 1176706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:55:50.990400 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:55:51.033890 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:55:51.074982 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:55:51.116748 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:55:51.160579 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:55:51.202188 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
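The -checkend 86400 runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a zero exit status means the certificate stays valid for at least that long. For example, with one of the paths from the log:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for >24h"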
	I1217 00:55:51.243239 1176706 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:51.243328 1176706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:55:51.243394 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.274971 1176706 cri.go:89] found id: ""
	I1217 00:55:51.275034 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:55:51.283750 1176706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:55:51.283758 1176706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:55:51.283810 1176706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:55:51.291948 1176706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.292487 1176706 kubeconfig.go:125] found "functional-389537" server: "https://192.168.49.2:8441"
	I1217 00:55:51.293778 1176706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:55:51.304922 1176706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:41:14.220606710 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:55:50.397867980 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1217 00:55:51.304944 1176706 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:55:51.304956 1176706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 00:55:51.305024 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.335594 1176706 cri.go:89] found id: ""
	I1217 00:55:51.335654 1176706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:55:51.349252 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:55:51.357284 1176706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:45 /etc/kubernetes/scheduler.conf
	
	I1217 00:55:51.357346 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:55:51.365155 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:55:51.373122 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.373177 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:55:51.380532 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.387880 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.387941 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.395488 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:55:51.402971 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.403027 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:55:51.410207 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:55:51.417914 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:51.465120 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.243254 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.461995 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.527345 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.573822 1176706 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:55:52.573908 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.074814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.574907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.075012 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.575023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.574684 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.074609 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.574663 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.074765 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.574635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.074907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.574627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.074088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.574795 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.097233 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.574961 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.074054 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.574065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.075050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.574031 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.075006 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.574216 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.074748 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.573974 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.074753 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.574034 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.075017 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.574061 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.074905 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.574698 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.074763 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.574614 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.074085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.574076 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.074847 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.574675 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.074172 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.574715 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.074369 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.574662 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.074071 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.575002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.074917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.574153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.074723 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.574433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.074632 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.574760 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.074421 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.574365 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.074110 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.574084 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.074083 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.574229 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.075007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.574915 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.074637 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.574418 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.074231 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.574859 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.074383 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.574046 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.074153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.574749 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.074247 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.574077 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.074002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.574149 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.074309 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.574050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.074975 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.574187 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.074918 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.574916 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.074771 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.574779 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.074798 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.573985 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.074834 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.574776 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.074670 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.574866 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.074740 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.574090 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.074115 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.574007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.074661 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.574687 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.074553 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.574236 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.074239 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.574036 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.074932 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.574096 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.074026 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.574255 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.074880 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.574038 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.073993 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.574088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.574323 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.074338 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.574154 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.074792 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.574063 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.074852 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.574810 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.074586 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.574043 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.075023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.574226 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.074137 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.585259 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.074119 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.573988 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.074068 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.575029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:52.074819 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
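The entries above show the test driver polling "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500 ms while it waits for an apiserver process to appear on the node. A rough manual equivalent of that wait loop, as a sketch only: it assumes "minikube ssh" reaches the node for the profile under test, and the 90-second budget below is an assumption, not a value taken from this run.

    # Poll for a kube-apiserver process inside the minikube node, as the log above does.
    # Assumption: `minikube ssh` targets the profile under test; adjust with -p <profile> if needed.
    budget=90
    until [ -n "$(minikube ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" 2>/dev/null)" ]; do
      budget=$((budget - 1))
      [ "$budget" -le 0 ] && { echo "kube-apiserver never appeared" >&2; exit 1; }
      sleep 0.5
    done
    echo "kube-apiserver is running"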
	I1217 00:56:52.574056 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:52.574153 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:52.600363 1176706 cri.go:89] found id: ""
	I1217 00:56:52.600377 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.600384 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:52.600390 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:52.600466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:52.625666 1176706 cri.go:89] found id: ""
	I1217 00:56:52.625679 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.625686 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:52.625692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:52.625750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:52.651207 1176706 cri.go:89] found id: ""
	I1217 00:56:52.651220 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.651228 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:52.651233 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:52.651289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:52.675877 1176706 cri.go:89] found id: ""
	I1217 00:56:52.675891 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.675898 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:52.675904 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:52.675968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:52.705638 1176706 cri.go:89] found id: ""
	I1217 00:56:52.705651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.705658 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:52.705663 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:52.705733 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:52.734795 1176706 cri.go:89] found id: ""
	I1217 00:56:52.734809 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.734816 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:52.734821 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:52.734882 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:52.765098 1176706 cri.go:89] found id: ""
	I1217 00:56:52.765112 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.765119 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:52.765127 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:52.765138 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:52.797741 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:52.797759 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:52.872988 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:52.873007 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:52.891536 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:52.891552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:52.956983 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
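The repeated "connection refused" against localhost:8441 is consistent with the empty crictl listings above: no apiserver container exists, so nothing is bound to the profile's apiserver port and every kubectl call from inside the node fails. A quick manual confirmation from a shell on the node, as a sketch (port 8441 is taken from the errors above; the commands are standard tools, not part of this log):

    # Check whether anything serves the apiserver endpoint kubectl is dialing.
    curl -ks https://localhost:8441/healthz || echo "no apiserver answering on 8441"
    sudo ss -ltnp | grep 8441 || echo "port 8441 is not bound by any process"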
	I1217 00:56:52.956994 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:52.957004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.530194 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:55.540066 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:55.540129 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:55.566494 1176706 cri.go:89] found id: ""
	I1217 00:56:55.566509 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.566516 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:55.566521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:55.566579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:55.599453 1176706 cri.go:89] found id: ""
	I1217 00:56:55.599467 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.599474 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:55.599479 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:55.599539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:55.624628 1176706 cri.go:89] found id: ""
	I1217 00:56:55.624651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.624659 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:55.624664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:55.624720 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:55.650853 1176706 cri.go:89] found id: ""
	I1217 00:56:55.650867 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.650874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:55.650879 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:55.650947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:55.676274 1176706 cri.go:89] found id: ""
	I1217 00:56:55.676287 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.676295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:55.676302 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:55.676363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:55.705470 1176706 cri.go:89] found id: ""
	I1217 00:56:55.705484 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.705491 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:55.705497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:55.705577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:55.729482 1176706 cri.go:89] found id: ""
	I1217 00:56:55.729495 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.729502 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:55.729510 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:55.729520 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:55.797202 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:55.797223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:55.816424 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:55.816452 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:55.887945 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:55.887971 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:55.887984 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.962011 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:55.962032 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:58.492176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:58.503876 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:58.503952 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:58.530086 1176706 cri.go:89] found id: ""
	I1217 00:56:58.530101 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.530108 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:58.530114 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:58.530175 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:58.556063 1176706 cri.go:89] found id: ""
	I1217 00:56:58.556077 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.556084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:58.556090 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:58.556148 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:58.582188 1176706 cri.go:89] found id: ""
	I1217 00:56:58.582202 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.582209 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:58.582215 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:58.582295 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:58.607569 1176706 cri.go:89] found id: ""
	I1217 00:56:58.607583 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.607590 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:58.607595 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:58.607652 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:58.634350 1176706 cri.go:89] found id: ""
	I1217 00:56:58.634364 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.634371 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:58.634378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:58.634445 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:58.664026 1176706 cri.go:89] found id: ""
	I1217 00:56:58.664040 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.664048 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:58.664053 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:58.664114 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:58.689017 1176706 cri.go:89] found id: ""
	I1217 00:56:58.689030 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.689037 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:58.689050 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:58.689060 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:58.754795 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:58.754815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:58.775189 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:58.775206 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:58.849221 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:58.849231 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:58.849243 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:58.922086 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:58.922107 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.451030 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:01.460964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:01.461034 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:01.489661 1176706 cri.go:89] found id: ""
	I1217 00:57:01.489685 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.489693 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:01.489698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:01.489767 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:01.515445 1176706 cri.go:89] found id: ""
	I1217 00:57:01.515468 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.515476 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:01.515482 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:01.515549 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:01.540532 1176706 cri.go:89] found id: ""
	I1217 00:57:01.540546 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.540554 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:01.540560 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:01.540629 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:01.569650 1176706 cri.go:89] found id: ""
	I1217 00:57:01.569664 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.569671 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:01.569676 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:01.569738 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:01.596059 1176706 cri.go:89] found id: ""
	I1217 00:57:01.596072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.596080 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:01.596085 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:01.596140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:01.621197 1176706 cri.go:89] found id: ""
	I1217 00:57:01.621211 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.621218 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:01.621224 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:01.621282 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:01.650001 1176706 cri.go:89] found id: ""
	I1217 00:57:01.650014 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.650022 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:01.650029 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:01.650040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:01.667789 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:01.667805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:01.730637 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:01.730688 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:01.730705 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:01.804764 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:01.804783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.853135 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:01.853152 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.422102 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:04.432445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:04.432511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:04.456733 1176706 cri.go:89] found id: ""
	I1217 00:57:04.456747 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.456754 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:04.456760 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:04.456817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:04.481576 1176706 cri.go:89] found id: ""
	I1217 00:57:04.481591 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.481599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:04.481604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:04.481663 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:04.511390 1176706 cri.go:89] found id: ""
	I1217 00:57:04.511405 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.511412 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:04.511417 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:04.511481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:04.539584 1176706 cri.go:89] found id: ""
	I1217 00:57:04.539608 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.539615 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:04.539621 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:04.539686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:04.564039 1176706 cri.go:89] found id: ""
	I1217 00:57:04.564054 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.564061 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:04.564067 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:04.564126 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:04.588270 1176706 cri.go:89] found id: ""
	I1217 00:57:04.588283 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.588291 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:04.588296 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:04.588352 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:04.615420 1176706 cri.go:89] found id: ""
	I1217 00:57:04.615435 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.615442 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:04.615450 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:04.615461 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:04.648626 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:04.648647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.714893 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:04.714913 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:04.733517 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:04.733535 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:04.824195 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:04.824206 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:04.824217 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.400917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:07.410917 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:07.410975 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:07.437282 1176706 cri.go:89] found id: ""
	I1217 00:57:07.437303 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.437315 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:07.437325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:07.437414 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:07.466491 1176706 cri.go:89] found id: ""
	I1217 00:57:07.466506 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.466513 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:07.466518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:07.466585 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:07.491017 1176706 cri.go:89] found id: ""
	I1217 00:57:07.491030 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.491037 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:07.491042 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:07.491100 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:07.516269 1176706 cri.go:89] found id: ""
	I1217 00:57:07.516288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.516295 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:07.516301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:07.516370 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:07.541854 1176706 cri.go:89] found id: ""
	I1217 00:57:07.541867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.541874 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:07.541880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:07.541948 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:07.571479 1176706 cri.go:89] found id: ""
	I1217 00:57:07.571493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.571509 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:07.571516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:07.571576 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:07.597046 1176706 cri.go:89] found id: ""
	I1217 00:57:07.597072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.597079 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:07.597087 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:07.597097 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:07.672318 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:07.672336 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:07.672349 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.747576 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:07.747595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:07.779509 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:07.779525 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:07.855959 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:07.855980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.376085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:10.386576 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:10.386639 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:10.413996 1176706 cri.go:89] found id: ""
	I1217 00:57:10.414010 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.414017 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:10.414022 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:10.414082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:10.440046 1176706 cri.go:89] found id: ""
	I1217 00:57:10.440060 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.440067 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:10.440073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:10.440131 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:10.465533 1176706 cri.go:89] found id: ""
	I1217 00:57:10.465547 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.465563 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:10.465569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:10.465631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:10.491563 1176706 cri.go:89] found id: ""
	I1217 00:57:10.491577 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.491585 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:10.491590 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:10.491653 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:10.519680 1176706 cri.go:89] found id: ""
	I1217 00:57:10.519694 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.519710 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:10.519717 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:10.519778 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:10.556939 1176706 cri.go:89] found id: ""
	I1217 00:57:10.556956 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.556963 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:10.556969 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:10.557025 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:10.582061 1176706 cri.go:89] found id: ""
	I1217 00:57:10.582075 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.582082 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:10.582091 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:10.582102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:10.651854 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:10.651875 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.671002 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:10.671020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:10.744191 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:10.744201 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:10.744213 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:10.823224 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:10.823244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:13.353067 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:13.363299 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:13.363363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:13.388077 1176706 cri.go:89] found id: ""
	I1217 00:57:13.388090 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.388098 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:13.388103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:13.388166 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:13.414095 1176706 cri.go:89] found id: ""
	I1217 00:57:13.414109 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.414117 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:13.414122 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:13.414178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:13.439153 1176706 cri.go:89] found id: ""
	I1217 00:57:13.439167 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.439174 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:13.439180 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:13.439237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:13.465255 1176706 cri.go:89] found id: ""
	I1217 00:57:13.465269 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.465277 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:13.465282 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:13.465342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:13.495274 1176706 cri.go:89] found id: ""
	I1217 00:57:13.495288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.495295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:13.495301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:13.495359 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:13.520781 1176706 cri.go:89] found id: ""
	I1217 00:57:13.520795 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.520803 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:13.520808 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:13.520868 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:13.547934 1176706 cri.go:89] found id: ""
	I1217 00:57:13.547948 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.547955 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:13.547963 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:13.547974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:13.613843 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:13.613863 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:13.632465 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:13.632491 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:13.697651 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:13.697662 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:13.697673 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:13.766608 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:13.766627 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
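Each retry cycle in this log ends with the same diagnostic sweep: list the expected control-plane containers with crictl, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output. The sweep can be reproduced by hand from a shell inside the node; the sketch below simply strings together the commands quoted in the cycles above.

    # Diagnostic sweep mirroring the commands the log shows being run on the node.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== containers named $name =="
      sudo crictl ps -a --quiet --name="$name"
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a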
	I1217 00:57:16.302176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:16.312389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:16.312476 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:16.338447 1176706 cri.go:89] found id: ""
	I1217 00:57:16.338461 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.338468 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:16.338473 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:16.338533 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:16.365319 1176706 cri.go:89] found id: ""
	I1217 00:57:16.365333 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.365340 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:16.365346 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:16.365408 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:16.396455 1176706 cri.go:89] found id: ""
	I1217 00:57:16.396476 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.396483 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:16.396489 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:16.396550 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:16.425795 1176706 cri.go:89] found id: ""
	I1217 00:57:16.425809 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.425816 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:16.425822 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:16.425887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:16.454749 1176706 cri.go:89] found id: ""
	I1217 00:57:16.454763 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.454770 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:16.454776 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:16.454834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:16.479542 1176706 cri.go:89] found id: ""
	I1217 00:57:16.479555 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.479562 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:16.479567 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:16.479626 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:16.508783 1176706 cri.go:89] found id: ""
	I1217 00:57:16.508798 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.508805 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:16.508813 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:16.508824 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:16.577494 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:16.577515 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:16.595191 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:16.595211 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:16.665505 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:16.665516 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:16.665528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:16.733110 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:16.733132 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:19.271702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:19.282422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:19.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:19.310766 1176706 cri.go:89] found id: ""
	I1217 00:57:19.310781 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.310788 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:19.310794 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:19.310856 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:19.336393 1176706 cri.go:89] found id: ""
	I1217 00:57:19.336407 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.336435 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:19.336441 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:19.336512 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:19.363243 1176706 cri.go:89] found id: ""
	I1217 00:57:19.363258 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.363265 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:19.363270 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:19.363329 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:19.389985 1176706 cri.go:89] found id: ""
	I1217 00:57:19.390000 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.390007 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:19.390013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:19.390073 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:19.416019 1176706 cri.go:89] found id: ""
	I1217 00:57:19.416032 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.416040 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:19.416045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:19.416103 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:19.445523 1176706 cri.go:89] found id: ""
	I1217 00:57:19.445538 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.445545 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:19.445550 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:19.445611 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:19.470033 1176706 cri.go:89] found id: ""
	I1217 00:57:19.470047 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.470055 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:19.470063 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:19.470075 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:19.535642 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:19.535662 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:19.553701 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:19.553718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:19.615955 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:19.615966 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:19.615977 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:19.685077 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:19.685098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.217382 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:22.227714 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:22.227775 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:22.252242 1176706 cri.go:89] found id: ""
	I1217 00:57:22.252256 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.252263 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:22.252268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:22.252325 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:22.277476 1176706 cri.go:89] found id: ""
	I1217 00:57:22.277491 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.277498 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:22.277504 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:22.277561 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:22.302807 1176706 cri.go:89] found id: ""
	I1217 00:57:22.302821 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.302829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:22.302834 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:22.302905 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:22.332455 1176706 cri.go:89] found id: ""
	I1217 00:57:22.332469 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.332476 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:22.332483 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:22.332552 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:22.361365 1176706 cri.go:89] found id: ""
	I1217 00:57:22.361380 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.361387 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:22.361392 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:22.361453 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:22.387211 1176706 cri.go:89] found id: ""
	I1217 00:57:22.387224 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.387232 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:22.387237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:22.387297 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:22.413238 1176706 cri.go:89] found id: ""
	I1217 00:57:22.413252 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.413260 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:22.413267 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:22.413278 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:22.478085 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:22.478096 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:22.478105 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:22.546790 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:22.546813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.582711 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:22.582732 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:22.648758 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:22.648780 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.166726 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:25.177337 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:25.177400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:25.202561 1176706 cri.go:89] found id: ""
	I1217 00:57:25.202576 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.202583 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:25.202589 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:25.202650 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:25.231070 1176706 cri.go:89] found id: ""
	I1217 00:57:25.231085 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.231092 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:25.231098 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:25.231162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:25.256786 1176706 cri.go:89] found id: ""
	I1217 00:57:25.256799 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.256806 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:25.256811 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:25.256870 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:25.282392 1176706 cri.go:89] found id: ""
	I1217 00:57:25.282415 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.282423 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:25.282429 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:25.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:25.311168 1176706 cri.go:89] found id: ""
	I1217 00:57:25.311182 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.311189 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:25.311195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:25.311259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:25.339431 1176706 cri.go:89] found id: ""
	I1217 00:57:25.339446 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.339453 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:25.339459 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:25.339517 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:25.365122 1176706 cri.go:89] found id: ""
	I1217 00:57:25.365136 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.365144 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:25.365152 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:25.365162 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:25.430307 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:25.430326 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.447805 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:25.447822 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:25.515790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:25.515802 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:25.515813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:25.590022 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:25.590049 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.122003 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:28.132581 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:28.132644 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:28.158913 1176706 cri.go:89] found id: ""
	I1217 00:57:28.158927 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.158944 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:28.158950 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:28.159029 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:28.185443 1176706 cri.go:89] found id: ""
	I1217 00:57:28.185478 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.185486 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:28.185492 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:28.185565 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:28.212156 1176706 cri.go:89] found id: ""
	I1217 00:57:28.212180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.212187 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:28.212193 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:28.212303 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:28.238113 1176706 cri.go:89] found id: ""
	I1217 00:57:28.238128 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.238135 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:28.238140 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:28.238198 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:28.267252 1176706 cri.go:89] found id: ""
	I1217 00:57:28.267266 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.267273 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:28.267278 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:28.267335 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:28.299262 1176706 cri.go:89] found id: ""
	I1217 00:57:28.299277 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.299284 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:28.299290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:28.299349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:28.325216 1176706 cri.go:89] found id: ""
	I1217 00:57:28.325231 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.325247 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:28.325255 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:28.325267 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:28.342976 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:28.342992 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:28.411022 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:28.411033 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:28.411044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:28.479626 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:28.479647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.508235 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:28.508251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.075024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:31.085476 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:31.085543 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:31.115244 1176706 cri.go:89] found id: ""
	I1217 00:57:31.115259 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.115267 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:31.115272 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:31.115332 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:31.146093 1176706 cri.go:89] found id: ""
	I1217 00:57:31.146111 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.146119 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:31.146125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:31.146188 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:31.173490 1176706 cri.go:89] found id: ""
	I1217 00:57:31.173505 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.173512 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:31.173518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:31.173577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:31.199862 1176706 cri.go:89] found id: ""
	I1217 00:57:31.199876 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.199883 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:31.199889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:31.199953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:31.229151 1176706 cri.go:89] found id: ""
	I1217 00:57:31.229164 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.229172 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:31.229177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:31.229234 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:31.255292 1176706 cri.go:89] found id: ""
	I1217 00:57:31.255306 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.255313 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:31.255319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:31.255378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:31.280011 1176706 cri.go:89] found id: ""
	I1217 00:57:31.280024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.280032 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:31.280040 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:31.280050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:31.351624 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:31.351644 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:31.380210 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:31.380226 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.448265 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:31.448288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:31.466144 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:31.466161 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:31.530079 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.030804 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:34.041923 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:34.041984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:34.070601 1176706 cri.go:89] found id: ""
	I1217 00:57:34.070617 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.070624 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:34.070630 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:34.070689 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:34.097552 1176706 cri.go:89] found id: ""
	I1217 00:57:34.097566 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.097573 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:34.097579 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:34.097647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:34.124476 1176706 cri.go:89] found id: ""
	I1217 00:57:34.124490 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.124497 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:34.124503 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:34.124580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:34.150077 1176706 cri.go:89] found id: ""
	I1217 00:57:34.150091 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.150099 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:34.150104 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:34.150162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:34.176964 1176706 cri.go:89] found id: ""
	I1217 00:57:34.176978 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.176992 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:34.176998 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:34.177055 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:34.201831 1176706 cri.go:89] found id: ""
	I1217 00:57:34.201845 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.201852 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:34.201857 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:34.201914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:34.227100 1176706 cri.go:89] found id: ""
	I1217 00:57:34.227114 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.227122 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:34.227129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:34.227140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:34.292098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.292108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:34.292119 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:34.361262 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:34.361287 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:34.395072 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:34.395087 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:34.462475 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:34.462498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:36.980702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:36.992944 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:36.993003 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:37.025574 1176706 cri.go:89] found id: ""
	I1217 00:57:37.025592 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.025616 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:37.025622 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:37.025707 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:37.054876 1176706 cri.go:89] found id: ""
	I1217 00:57:37.054890 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.054897 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:37.054903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:37.054968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:37.084974 1176706 cri.go:89] found id: ""
	I1217 00:57:37.084987 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.084995 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:37.085000 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:37.085059 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:37.110853 1176706 cri.go:89] found id: ""
	I1217 00:57:37.110867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.110874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:37.110883 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:37.110941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:37.137064 1176706 cri.go:89] found id: ""
	I1217 00:57:37.137083 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.137090 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:37.137096 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:37.137159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:37.167116 1176706 cri.go:89] found id: ""
	I1217 00:57:37.167130 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.167148 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:37.167162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:37.167230 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:37.192827 1176706 cri.go:89] found id: ""
	I1217 00:57:37.192848 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.192856 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:37.192863 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:37.192874 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:37.210956 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:37.210974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:37.275882 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:37.275893 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:37.275904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:37.344194 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:37.344215 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:37.375642 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:37.375658 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:39.944605 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:39.954951 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:39.955014 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:39.984302 1176706 cri.go:89] found id: ""
	I1217 00:57:39.984316 1176706 logs.go:282] 0 containers: []
	W1217 00:57:39.984323 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:39.984328 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:39.984383 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:40.029442 1176706 cri.go:89] found id: ""
	I1217 00:57:40.029458 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.029466 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:40.029471 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:40.029538 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:40.063022 1176706 cri.go:89] found id: ""
	I1217 00:57:40.063037 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.063044 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:40.063049 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:40.063110 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:40.094257 1176706 cri.go:89] found id: ""
	I1217 00:57:40.094272 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.094280 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:40.094286 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:40.094349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:40.127887 1176706 cri.go:89] found id: ""
	I1217 00:57:40.127901 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.127908 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:40.127913 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:40.127972 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:40.155475 1176706 cri.go:89] found id: ""
	I1217 00:57:40.155489 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.155496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:40.155502 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:40.155560 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:40.181940 1176706 cri.go:89] found id: ""
	I1217 00:57:40.181955 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.181962 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:40.181970 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:40.181980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:40.254464 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:40.254484 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:40.285810 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:40.285825 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:40.352509 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:40.352528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:40.370334 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:40.370356 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:40.432624 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:42.932898 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:42.943186 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:42.943245 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:42.971121 1176706 cri.go:89] found id: ""
	I1217 00:57:42.971137 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.971144 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:42.971149 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:42.971207 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:42.997154 1176706 cri.go:89] found id: ""
	I1217 00:57:42.997169 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.997175 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:42.997181 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:42.997240 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:43.034752 1176706 cri.go:89] found id: ""
	I1217 00:57:43.034767 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.034775 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:43.034781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:43.034840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:43.064326 1176706 cri.go:89] found id: ""
	I1217 00:57:43.064339 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.064347 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:43.064352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:43.064428 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:43.095997 1176706 cri.go:89] found id: ""
	I1217 00:57:43.096011 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.096019 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:43.096024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:43.096082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:43.126545 1176706 cri.go:89] found id: ""
	I1217 00:57:43.126560 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.126568 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:43.126573 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:43.126633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:43.157043 1176706 cri.go:89] found id: ""
	I1217 00:57:43.157058 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.157065 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:43.157073 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:43.157102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:43.223228 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:43.223248 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:43.241053 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:43.241070 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:43.307388 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:43.307398 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:43.307409 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:43.376649 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:43.376669 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:45.908814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:45.918992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:45.919051 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:45.944157 1176706 cri.go:89] found id: ""
	I1217 00:57:45.944170 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.944178 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:45.944183 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:45.944242 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:45.969417 1176706 cri.go:89] found id: ""
	I1217 00:57:45.969431 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.969438 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:45.969444 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:45.969502 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:45.995472 1176706 cri.go:89] found id: ""
	I1217 00:57:45.995486 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.995494 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:45.995499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:45.995566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:46.034994 1176706 cri.go:89] found id: ""
	I1217 00:57:46.035007 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.035015 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:46.035020 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:46.035081 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:46.065460 1176706 cri.go:89] found id: ""
	I1217 00:57:46.065473 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.065480 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:46.065486 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:46.065559 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:46.092450 1176706 cri.go:89] found id: ""
	I1217 00:57:46.092465 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.092472 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:46.092478 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:46.092557 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:46.122198 1176706 cri.go:89] found id: ""
	I1217 00:57:46.122212 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.122221 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:46.122229 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:46.122241 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:46.140129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:46.140147 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:46.204790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:46.204800 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:46.204810 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:46.273034 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:46.273054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:46.300763 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:46.300778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:48.875764 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:48.886304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:48.886369 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:48.923231 1176706 cri.go:89] found id: ""
	I1217 00:57:48.923246 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.923254 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:48.923259 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:48.923334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:48.951521 1176706 cri.go:89] found id: ""
	I1217 00:57:48.951536 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.951544 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:48.951549 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:48.951610 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:48.977574 1176706 cri.go:89] found id: ""
	I1217 00:57:48.977588 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.977595 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:48.977600 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:48.977661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:49.016389 1176706 cri.go:89] found id: ""
	I1217 00:57:49.016402 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.016410 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:49.016446 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:49.016511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:49.050180 1176706 cri.go:89] found id: ""
	I1217 00:57:49.050193 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.050201 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:49.050206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:49.050271 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:49.088387 1176706 cri.go:89] found id: ""
	I1217 00:57:49.088401 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.088409 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:49.088445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:49.088508 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:49.118579 1176706 cri.go:89] found id: ""
	I1217 00:57:49.118593 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.118600 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:49.118608 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:49.118618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:49.189917 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:49.189938 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:49.208217 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:49.208234 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:49.270961 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:49.270977 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:49.270988 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:49.340033 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:49.340054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:51.873428 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:51.883781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:51.883840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:51.908479 1176706 cri.go:89] found id: ""
	I1217 00:57:51.908493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.908500 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:51.908505 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:51.908562 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:51.938045 1176706 cri.go:89] found id: ""
	I1217 00:57:51.938061 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.938068 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:51.938073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:51.938135 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:51.964570 1176706 cri.go:89] found id: ""
	I1217 00:57:51.964585 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.964592 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:51.964597 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:51.964654 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:51.989700 1176706 cri.go:89] found id: ""
	I1217 00:57:51.989714 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.989722 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:51.989727 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:51.989784 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:52.030756 1176706 cri.go:89] found id: ""
	I1217 00:57:52.030771 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.030779 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:52.030786 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:52.030860 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:52.067806 1176706 cri.go:89] found id: ""
	I1217 00:57:52.067829 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.067838 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:52.067845 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:52.067915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:52.097071 1176706 cri.go:89] found id: ""
	I1217 00:57:52.097102 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.097110 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:52.097118 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:52.097128 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:52.169931 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:52.169952 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:52.202012 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:52.202031 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:52.267897 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:52.267917 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:52.286898 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:52.286920 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:52.352095 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:54.853773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:54.863649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:54.863712 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:54.888435 1176706 cri.go:89] found id: ""
	I1217 00:57:54.888449 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.888456 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:54.888462 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:54.888523 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:54.927009 1176706 cri.go:89] found id: ""
	I1217 00:57:54.927024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.927031 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:54.927037 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:54.927095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:54.953405 1176706 cri.go:89] found id: ""
	I1217 00:57:54.953420 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.953428 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:54.953434 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:54.953493 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:54.979162 1176706 cri.go:89] found id: ""
	I1217 00:57:54.979176 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.979183 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:54.979189 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:54.979256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:55.025542 1176706 cri.go:89] found id: ""
	I1217 00:57:55.025564 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.025572 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:55.025577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:55.025641 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:55.059408 1176706 cri.go:89] found id: ""
	I1217 00:57:55.059422 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.059429 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:55.059435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:55.059492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:55.085846 1176706 cri.go:89] found id: ""
	I1217 00:57:55.085860 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.085867 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:55.085875 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:55.085884 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:55.154061 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:55.154083 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:55.182650 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:55.182667 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:55.252924 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:55.252945 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:55.271464 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:55.271481 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:55.340175 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:57.840461 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:57.853057 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:57.853178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:57.883066 1176706 cri.go:89] found id: ""
	I1217 00:57:57.883081 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.883088 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:57.883094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:57.883152 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:57.909166 1176706 cri.go:89] found id: ""
	I1217 00:57:57.909180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.909189 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:57.909195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:57.909255 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:57.935701 1176706 cri.go:89] found id: ""
	I1217 00:57:57.935716 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.935733 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:57.935739 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:57.935805 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:57.969374 1176706 cri.go:89] found id: ""
	I1217 00:57:57.969397 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.969404 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:57.969410 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:57.969481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:57.995365 1176706 cri.go:89] found id: ""
	I1217 00:57:57.995379 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.995397 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:57.995404 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:57.995460 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:58.025187 1176706 cri.go:89] found id: ""
	I1217 00:57:58.025207 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.025215 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:58.025221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:58.025343 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:58.062705 1176706 cri.go:89] found id: ""
	I1217 00:57:58.062719 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.062738 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:58.062745 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:58.062755 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:58.135108 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:58.135129 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:58.154038 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:58.154058 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:58.219558 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:57:58.219569 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:58.219582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:58.287658 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:58.287678 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:00.817470 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:00.827992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:00.828056 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:00.852955 1176706 cri.go:89] found id: ""
	I1217 00:58:00.852969 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.852976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:00.852983 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:00.853043 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:00.877725 1176706 cri.go:89] found id: ""
	I1217 00:58:00.877739 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.877746 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:00.877751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:00.877811 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:00.901883 1176706 cri.go:89] found id: ""
	I1217 00:58:00.901897 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.901905 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:00.901910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:00.901965 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:00.928695 1176706 cri.go:89] found id: ""
	I1217 00:58:00.928709 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.928716 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:00.928722 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:00.928780 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:00.953517 1176706 cri.go:89] found id: ""
	I1217 00:58:00.953531 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.953538 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:00.953544 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:00.953601 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:00.982916 1176706 cri.go:89] found id: ""
	I1217 00:58:00.982930 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.982946 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:00.982952 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:00.983021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:01.012486 1176706 cri.go:89] found id: ""
	I1217 00:58:01.012510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:01.012518 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:01.012526 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:01.012538 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:01.034573 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:01.034595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:01.107160 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:58:01.107170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:01.107180 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:01.180136 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:01.180158 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:01.212434 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:01.212451 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:03.780773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:03.791245 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:03.791309 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:03.819281 1176706 cri.go:89] found id: ""
	I1217 00:58:03.819296 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.819304 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:03.819309 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:03.819367 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:03.847330 1176706 cri.go:89] found id: ""
	I1217 00:58:03.847344 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.847351 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:03.847357 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:03.847416 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:03.874793 1176706 cri.go:89] found id: ""
	I1217 00:58:03.874806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.874814 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:03.874819 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:03.874883 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:03.904651 1176706 cri.go:89] found id: ""
	I1217 00:58:03.904665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.904672 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:03.904678 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:03.904744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:03.930157 1176706 cri.go:89] found id: ""
	I1217 00:58:03.930178 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.930186 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:03.930191 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:03.930252 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:03.960348 1176706 cri.go:89] found id: ""
	I1217 00:58:03.960371 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.960380 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:03.960386 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:03.960473 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:03.985501 1176706 cri.go:89] found id: ""
	I1217 00:58:03.985515 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.985523 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:03.985530 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:03.985541 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:04.005563 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:04.005592 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:04.085204 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:04.085219 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:04.085231 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:04.154363 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:04.154385 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:04.182481 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:04.182498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:06.754413 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:06.765192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:06.765266 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:06.792761 1176706 cri.go:89] found id: ""
	I1217 00:58:06.792779 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.792786 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:06.792791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:06.792850 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:06.817882 1176706 cri.go:89] found id: ""
	I1217 00:58:06.817896 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.817903 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:06.817909 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:06.817967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:06.843295 1176706 cri.go:89] found id: ""
	I1217 00:58:06.843309 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.843316 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:06.843321 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:06.843380 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:06.871025 1176706 cri.go:89] found id: ""
	I1217 00:58:06.871039 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.871046 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:06.871052 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:06.871109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:06.899109 1176706 cri.go:89] found id: ""
	I1217 00:58:06.899124 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.899132 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:06.899137 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:06.899212 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:06.923948 1176706 cri.go:89] found id: ""
	I1217 00:58:06.923962 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.923980 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:06.923987 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:06.924045 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:06.948813 1176706 cri.go:89] found id: ""
	I1217 00:58:06.948827 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.948834 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:06.948842 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:06.948853 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:07.015114 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:07.015140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:07.034991 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:07.035010 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:07.105757 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:07.105767 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:07.105778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:07.177693 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:07.177717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:09.709755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:09.720409 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:09.720507 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:09.745603 1176706 cri.go:89] found id: ""
	I1217 00:58:09.745618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.745626 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:09.745631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:09.745691 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:09.775492 1176706 cri.go:89] found id: ""
	I1217 00:58:09.775507 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.775515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:09.775520 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:09.775579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:09.801149 1176706 cri.go:89] found id: ""
	I1217 00:58:09.801164 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.801171 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:09.801177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:09.801238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:09.830147 1176706 cri.go:89] found id: ""
	I1217 00:58:09.830160 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.830168 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:09.830173 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:09.830232 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:09.858791 1176706 cri.go:89] found id: ""
	I1217 00:58:09.858806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.858825 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:09.858832 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:09.858911 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:09.884827 1176706 cri.go:89] found id: ""
	I1217 00:58:09.884842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.884849 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:09.884855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:09.884918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:09.910380 1176706 cri.go:89] found id: ""
	I1217 00:58:09.910394 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.910402 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:09.910409 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:09.910420 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:09.976905 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:09.976924 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:09.995004 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:09.995027 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:10.084593 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:10.084604 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:10.084614 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:10.157583 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:10.157604 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.691225 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:12.701275 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:12.701340 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:12.730986 1176706 cri.go:89] found id: ""
	I1217 00:58:12.731000 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.731018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:12.731024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:12.731084 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:12.757010 1176706 cri.go:89] found id: ""
	I1217 00:58:12.757029 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.757037 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:12.757045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:12.757119 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:12.782232 1176706 cri.go:89] found id: ""
	I1217 00:58:12.782245 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.782252 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:12.782257 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:12.782314 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:12.808352 1176706 cri.go:89] found id: ""
	I1217 00:58:12.808366 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.808373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:12.808378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:12.808472 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:12.834094 1176706 cri.go:89] found id: ""
	I1217 00:58:12.834109 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.834116 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:12.834121 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:12.834184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:12.861537 1176706 cri.go:89] found id: ""
	I1217 00:58:12.861551 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.861558 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:12.861564 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:12.861625 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:12.891320 1176706 cri.go:89] found id: ""
	I1217 00:58:12.891334 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.891351 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:12.891360 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:12.891373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:12.961252 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:12.961272 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.990873 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:12.990889 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:13.068166 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:13.068185 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:13.087641 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:13.087660 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:13.158967 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:15.660635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:15.670593 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:15.670685 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:15.695674 1176706 cri.go:89] found id: ""
	I1217 00:58:15.695688 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.695695 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:15.695700 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:15.695757 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:15.723007 1176706 cri.go:89] found id: ""
	I1217 00:58:15.723020 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.723028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:15.723033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:15.723093 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:15.752134 1176706 cri.go:89] found id: ""
	I1217 00:58:15.752149 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.752156 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:15.752161 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:15.752219 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:15.777521 1176706 cri.go:89] found id: ""
	I1217 00:58:15.777535 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.777542 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:15.777547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:15.777606 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:15.805205 1176706 cri.go:89] found id: ""
	I1217 00:58:15.805220 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.805233 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:15.805239 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:15.805296 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:15.830102 1176706 cri.go:89] found id: ""
	I1217 00:58:15.830116 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.830123 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:15.830129 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:15.830191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:15.859258 1176706 cri.go:89] found id: ""
	I1217 00:58:15.859272 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.859279 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:15.859297 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:15.859307 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:15.924910 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:15.924930 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:15.943203 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:15.943219 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:16.011016 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:16.011027 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:16.011038 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:16.094076 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:16.094096 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:18.624032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:18.634861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:18.634925 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:18.660502 1176706 cri.go:89] found id: ""
	I1217 00:58:18.660528 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.660536 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:18.660541 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:18.660600 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:18.685828 1176706 cri.go:89] found id: ""
	I1217 00:58:18.685841 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.685848 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:18.685854 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:18.685920 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:18.716173 1176706 cri.go:89] found id: ""
	I1217 00:58:18.716187 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.716194 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:18.716199 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:18.716260 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:18.742960 1176706 cri.go:89] found id: ""
	I1217 00:58:18.742975 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.742983 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:18.742988 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:18.743046 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:18.768597 1176706 cri.go:89] found id: ""
	I1217 00:58:18.768610 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.768623 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:18.768628 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:18.768687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:18.795244 1176706 cri.go:89] found id: ""
	I1217 00:58:18.795267 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.795276 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:18.795281 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:18.795355 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:18.826316 1176706 cri.go:89] found id: ""
	I1217 00:58:18.826330 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.826337 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:18.826345 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:18.826354 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:18.892936 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:18.892954 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:18.911274 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:18.911292 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:18.973399 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:18.973409 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:18.973432 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:19.052103 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:19.052124 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.589056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:21.599320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:21.599382 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:21.626547 1176706 cri.go:89] found id: ""
	I1217 00:58:21.626561 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.626568 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:21.626574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:21.626631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:21.651881 1176706 cri.go:89] found id: ""
	I1217 00:58:21.651895 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.651902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:21.651910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:21.651967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:21.677496 1176706 cri.go:89] found id: ""
	I1217 00:58:21.677510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.677519 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:21.677524 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:21.677580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:21.701536 1176706 cri.go:89] found id: ""
	I1217 00:58:21.701550 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.701557 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:21.701562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:21.701619 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:21.725663 1176706 cri.go:89] found id: ""
	I1217 00:58:21.725677 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.725695 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:21.725701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:21.725772 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:21.749912 1176706 cri.go:89] found id: ""
	I1217 00:58:21.749926 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.749937 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:21.749943 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:21.750000 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:21.774360 1176706 cri.go:89] found id: ""
	I1217 00:58:21.774374 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.774381 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:21.774389 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:21.774399 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:21.841964 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:21.841983 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.870200 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:21.870218 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:21.943734 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:21.943754 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:21.961798 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:21.961816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:22.037147 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.537433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:24.547596 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:24.547661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:24.575282 1176706 cri.go:89] found id: ""
	I1217 00:58:24.575297 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.575306 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:24.575312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:24.575371 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:24.600578 1176706 cri.go:89] found id: ""
	I1217 00:58:24.600592 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.600599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:24.600604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:24.600665 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:24.626604 1176706 cri.go:89] found id: ""
	I1217 00:58:24.626618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.626626 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:24.626631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:24.626687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:24.652284 1176706 cri.go:89] found id: ""
	I1217 00:58:24.652298 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.652316 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:24.652323 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:24.652381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:24.681413 1176706 cri.go:89] found id: ""
	I1217 00:58:24.681426 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.681433 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:24.681439 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:24.681495 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:24.709801 1176706 cri.go:89] found id: ""
	I1217 00:58:24.709815 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.709822 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:24.709830 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:24.709887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:24.741982 1176706 cri.go:89] found id: ""
	I1217 00:58:24.741995 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.742010 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:24.742018 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:24.742029 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:24.806559 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.806571 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:24.806581 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:24.875943 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:24.875962 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:24.904944 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:24.904960 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:24.972857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:24.972878 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.491741 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:27.502162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:27.502241 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:27.528328 1176706 cri.go:89] found id: ""
	I1217 00:58:27.528343 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.528350 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:27.528356 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:27.528455 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:27.558520 1176706 cri.go:89] found id: ""
	I1217 00:58:27.558534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.558541 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:27.558547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:27.558605 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:27.587047 1176706 cri.go:89] found id: ""
	I1217 00:58:27.587061 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.587070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:27.587075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:27.587133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:27.615351 1176706 cri.go:89] found id: ""
	I1217 00:58:27.615365 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.615373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:27.615381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:27.615443 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:27.640936 1176706 cri.go:89] found id: ""
	I1217 00:58:27.640950 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.640959 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:27.640964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:27.641021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:27.667985 1176706 cri.go:89] found id: ""
	I1217 00:58:27.667999 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.668007 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:27.668013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:27.668077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:27.694148 1176706 cri.go:89] found id: ""
	I1217 00:58:27.694162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.694170 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:27.694177 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:27.694188 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:27.764618 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:27.764639 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.784025 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:27.784040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:27.852310 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:27.852320 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:27.852331 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:27.922044 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:27.922065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:30.450766 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:30.460791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:30.460852 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:30.485997 1176706 cri.go:89] found id: ""
	I1217 00:58:30.486011 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.486018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:30.486023 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:30.486080 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:30.512125 1176706 cri.go:89] found id: ""
	I1217 00:58:30.512138 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.512157 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:30.512163 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:30.512221 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:30.538512 1176706 cri.go:89] found id: ""
	I1217 00:58:30.538526 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.538533 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:30.538539 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:30.538597 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:30.564757 1176706 cri.go:89] found id: ""
	I1217 00:58:30.564771 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.564778 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:30.564784 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:30.564842 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:30.594808 1176706 cri.go:89] found id: ""
	I1217 00:58:30.594821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.594840 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:30.594846 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:30.594919 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:30.624595 1176706 cri.go:89] found id: ""
	I1217 00:58:30.624609 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.624617 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:30.624623 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:30.624683 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:30.653013 1176706 cri.go:89] found id: ""
	I1217 00:58:30.653027 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.653034 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:30.653042 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:30.653052 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:30.720030 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:30.720050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:30.738237 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:30.738255 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:30.801692 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:30.801705 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:30.801717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:30.870606 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:30.870628 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.401439 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:33.411804 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:33.411865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:33.437664 1176706 cri.go:89] found id: ""
	I1217 00:58:33.437678 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.437686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:33.437692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:33.437752 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:33.463774 1176706 cri.go:89] found id: ""
	I1217 00:58:33.463796 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.463803 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:33.463809 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:33.463865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:33.492800 1176706 cri.go:89] found id: ""
	I1217 00:58:33.492822 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.492829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:33.492835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:33.492896 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:33.518396 1176706 cri.go:89] found id: ""
	I1217 00:58:33.518410 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.518417 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:33.518422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:33.518481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:33.545369 1176706 cri.go:89] found id: ""
	I1217 00:58:33.545385 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.545393 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:33.545398 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:33.545469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:33.571642 1176706 cri.go:89] found id: ""
	I1217 00:58:33.571665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.571673 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:33.571679 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:33.571751 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:33.598928 1176706 cri.go:89] found id: ""
	I1217 00:58:33.598953 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.598961 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:33.598970 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:33.598980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:33.617218 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:33.617237 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:33.681042 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:33.681053 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:33.681064 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:33.750561 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:33.750582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.779618 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:33.779637 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.351872 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:36.361748 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:36.361812 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:36.387484 1176706 cri.go:89] found id: ""
	I1217 00:58:36.387498 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.387505 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:36.387511 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:36.387567 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:36.413880 1176706 cri.go:89] found id: ""
	I1217 00:58:36.413894 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.413902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:36.413922 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:36.413979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:36.439073 1176706 cri.go:89] found id: ""
	I1217 00:58:36.439087 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.439095 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:36.439100 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:36.439159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:36.464148 1176706 cri.go:89] found id: ""
	I1217 00:58:36.464162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.464169 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:36.464175 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:36.464237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:36.489659 1176706 cri.go:89] found id: ""
	I1217 00:58:36.489673 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.489681 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:36.489686 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:36.489744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:36.514865 1176706 cri.go:89] found id: ""
	I1217 00:58:36.514879 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.514887 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:36.514892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:36.514953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:36.545081 1176706 cri.go:89] found id: ""
	I1217 00:58:36.545095 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.545103 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:36.545110 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:36.545120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:36.620571 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:36.620599 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:36.652294 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:36.652313 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.720685 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:36.720708 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:36.738692 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:36.738709 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:36.804409 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
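	The passes above never find a running kube-apiserver container, so each one falls back to gathering kubelet, dmesg, CRI-O and container-status logs, and every "describe nodes" attempt fails with connection refused on localhost:8441. A minimal sketch of reproducing one pass of these diagnostics by hand inside the node (assuming crictl and the bundled kubectl sit at the paths shown in the log; commands are the same ones the loop runs):
	
	    # is there any kube-apiserver container at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # the units minikube tails on each pass
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	
	    # the describe-nodes call that keeps failing with "connection refused"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	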
	I1217 00:58:39.304571 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:39.315407 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:39.315469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:39.343748 1176706 cri.go:89] found id: ""
	I1217 00:58:39.343762 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.343769 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:39.343775 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:39.343834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:39.371633 1176706 cri.go:89] found id: ""
	I1217 00:58:39.371648 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.371655 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:39.371661 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:39.371750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:39.397168 1176706 cri.go:89] found id: ""
	I1217 00:58:39.397183 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.397190 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:39.397196 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:39.397254 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:39.422379 1176706 cri.go:89] found id: ""
	I1217 00:58:39.422393 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.422400 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:39.422406 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:39.422466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:39.451362 1176706 cri.go:89] found id: ""
	I1217 00:58:39.451376 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.451384 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:39.451389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:39.451447 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:39.476838 1176706 cri.go:89] found id: ""
	I1217 00:58:39.476852 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.476862 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:39.476867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:39.476926 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:39.501892 1176706 cri.go:89] found id: ""
	I1217 00:58:39.501905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.501912 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:39.501924 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:39.501933 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:39.571771 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:39.571783 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:39.571793 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:39.642123 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:39.642144 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:39.673585 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:39.673602 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:39.742217 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:39.742236 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.260825 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:42.274064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:42.274140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:42.315322 1176706 cri.go:89] found id: ""
	I1217 00:58:42.315336 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.315346 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:42.315352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:42.315432 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:42.348891 1176706 cri.go:89] found id: ""
	I1217 00:58:42.348906 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.348914 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:42.348920 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:42.348984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:42.376853 1176706 cri.go:89] found id: ""
	I1217 00:58:42.376867 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.376874 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:42.376880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:42.376940 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:42.402292 1176706 cri.go:89] found id: ""
	I1217 00:58:42.402307 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.402315 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:42.402320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:42.402381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:42.432293 1176706 cri.go:89] found id: ""
	I1217 00:58:42.432306 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.432314 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:42.432319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:42.432378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:42.459173 1176706 cri.go:89] found id: ""
	I1217 00:58:42.459188 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.459195 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:42.459200 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:42.459259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:42.485520 1176706 cri.go:89] found id: ""
	I1217 00:58:42.485534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.485541 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:42.485549 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:42.485562 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:42.553260 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:42.553281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.571244 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:42.571261 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:42.633598 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:42.633609 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:42.633622 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:42.706387 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:42.706408 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.237565 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:45.259348 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:45.259429 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:45.305574 1176706 cri.go:89] found id: ""
	I1217 00:58:45.305589 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.305597 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:45.305602 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:45.305664 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:45.349162 1176706 cri.go:89] found id: ""
	I1217 00:58:45.349177 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.349187 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:45.349192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:45.349256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:45.376828 1176706 cri.go:89] found id: ""
	I1217 00:58:45.376842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.376849 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:45.376855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:45.376915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:45.402470 1176706 cri.go:89] found id: ""
	I1217 00:58:45.402485 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.402492 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:45.402497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:45.402554 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:45.429756 1176706 cri.go:89] found id: ""
	I1217 00:58:45.429790 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.429820 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:45.429842 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:45.429980 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:45.459618 1176706 cri.go:89] found id: ""
	I1217 00:58:45.459632 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.459640 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:45.459647 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:45.459709 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:45.486504 1176706 cri.go:89] found id: ""
	I1217 00:58:45.486518 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.486526 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:45.486533 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:45.486549 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:45.505026 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:45.505044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:45.569592 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:45.569602 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:45.569612 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:45.642249 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:45.642270 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.673783 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:45.673799 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.241441 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:48.253986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:48.254052 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:48.298549 1176706 cri.go:89] found id: ""
	I1217 00:58:48.298562 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.298569 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:48.298575 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:48.298633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:48.329982 1176706 cri.go:89] found id: ""
	I1217 00:58:48.329997 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.330004 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:48.330010 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:48.330068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:48.356278 1176706 cri.go:89] found id: ""
	I1217 00:58:48.356291 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.356298 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:48.356304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:48.356363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:48.381931 1176706 cri.go:89] found id: ""
	I1217 00:58:48.381944 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.381952 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:48.381957 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:48.382012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:48.408076 1176706 cri.go:89] found id: ""
	I1217 00:58:48.408091 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.408098 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:48.408103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:48.408167 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:48.438515 1176706 cri.go:89] found id: ""
	I1217 00:58:48.438529 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.438536 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:48.438542 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:48.438615 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:48.464771 1176706 cri.go:89] found id: ""
	I1217 00:58:48.464784 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.464791 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:48.464800 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:48.464815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.531756 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:48.531777 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:48.550180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:48.550197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:48.614503 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:48.614514 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:48.614524 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:48.683497 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:48.683519 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:51.214024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:51.224516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:51.224581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:51.250104 1176706 cri.go:89] found id: ""
	I1217 00:58:51.250118 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.250125 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:51.250131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:51.250204 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:51.287241 1176706 cri.go:89] found id: ""
	I1217 00:58:51.287255 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.287263 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:51.287268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:51.287334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:51.326285 1176706 cri.go:89] found id: ""
	I1217 00:58:51.326299 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.326306 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:51.326312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:51.326375 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:51.353495 1176706 cri.go:89] found id: ""
	I1217 00:58:51.353509 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.353516 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:51.353521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:51.353577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:51.379404 1176706 cri.go:89] found id: ""
	I1217 00:58:51.379417 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.379425 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:51.379430 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:51.379489 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:51.405891 1176706 cri.go:89] found id: ""
	I1217 00:58:51.405905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.405912 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:51.405919 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:51.405979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:51.431497 1176706 cri.go:89] found id: ""
	I1217 00:58:51.431510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.431529 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:51.431537 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:51.431547 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:51.497786 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:51.497805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:51.516101 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:51.516120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:51.584128 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:51.584139 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:51.584150 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:51.652739 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:51.652760 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:54.182755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:54.194058 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:54.194127 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:54.219806 1176706 cri.go:89] found id: ""
	I1217 00:58:54.219821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.219828 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:54.219833 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:54.219894 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:54.245268 1176706 cri.go:89] found id: ""
	I1217 00:58:54.245281 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.245289 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:54.245294 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:54.245353 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:54.278676 1176706 cri.go:89] found id: ""
	I1217 00:58:54.278690 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.278697 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:54.278703 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:54.278766 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:54.305307 1176706 cri.go:89] found id: ""
	I1217 00:58:54.305321 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.305329 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:54.305334 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:54.305400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:54.330666 1176706 cri.go:89] found id: ""
	I1217 00:58:54.330680 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.330688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:54.330693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:54.330763 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:54.356855 1176706 cri.go:89] found id: ""
	I1217 00:58:54.356875 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.356886 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:54.356892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:54.356985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:54.390389 1176706 cri.go:89] found id: ""
	I1217 00:58:54.390404 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.390411 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:54.390419 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:54.390429 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:54.456633 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:54.456654 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:54.474716 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:54.474734 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:54.542032 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:54.542052 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:54.542063 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:54.614689 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:54.614710 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:57.146377 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:57.156881 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:57.156942 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:57.181785 1176706 cri.go:89] found id: ""
	I1217 00:58:57.181800 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.181808 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:57.181813 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:57.181869 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:57.208021 1176706 cri.go:89] found id: ""
	I1217 00:58:57.208046 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.208059 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:57.208065 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:57.208133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:57.235483 1176706 cri.go:89] found id: ""
	I1217 00:58:57.235497 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.235505 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:57.235510 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:57.235569 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:57.269950 1176706 cri.go:89] found id: ""
	I1217 00:58:57.269972 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.269980 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:57.269986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:57.270063 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:57.296896 1176706 cri.go:89] found id: ""
	I1217 00:58:57.296911 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.296918 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:57.296924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:57.296983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:57.325435 1176706 cri.go:89] found id: ""
	I1217 00:58:57.325452 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.325462 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:57.325468 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:57.325526 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:57.350942 1176706 cri.go:89] found id: ""
	I1217 00:58:57.350957 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.350965 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:57.350973 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:57.350982 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:57.416866 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:57.416886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:57.434717 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:57.434736 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:57.499393 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:57.499403 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:57.499414 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:57.567648 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:57.567668 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:00.097029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:00.143893 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:00.143993 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:00.260647 1176706 cri.go:89] found id: ""
	I1217 00:59:00.262402 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.262438 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:00.262449 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:00.262564 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:00.330718 1176706 cri.go:89] found id: ""
	I1217 00:59:00.330734 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.330745 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:00.330751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:00.330862 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:00.372603 1176706 cri.go:89] found id: ""
	I1217 00:59:00.372630 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.372638 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:00.372645 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:00.372721 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:00.403443 1176706 cri.go:89] found id: ""
	I1217 00:59:00.403469 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.403478 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:00.403484 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:00.403558 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:00.432235 1176706 cri.go:89] found id: ""
	I1217 00:59:00.432260 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.432268 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:00.432274 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:00.432341 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:00.464475 1176706 cri.go:89] found id: ""
	I1217 00:59:00.464489 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.464496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:00.464501 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:00.464563 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:00.494126 1176706 cri.go:89] found id: ""
	I1217 00:59:00.494156 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.494164 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:00.494172 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:00.494182 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:00.564811 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:00.564831 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:00.582720 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:00.582738 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:00.643909 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:00.643921 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:00.643931 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:00.716875 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:00.716895 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.245660 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:03.256968 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:03.257032 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:03.283953 1176706 cri.go:89] found id: ""
	I1217 00:59:03.283968 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.283976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:03.283981 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:03.284041 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:03.313015 1176706 cri.go:89] found id: ""
	I1217 00:59:03.313029 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.313036 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:03.313041 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:03.313098 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:03.341219 1176706 cri.go:89] found id: ""
	I1217 00:59:03.341233 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.341241 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:03.341246 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:03.341304 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:03.366417 1176706 cri.go:89] found id: ""
	I1217 00:59:03.366430 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.366437 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:03.366443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:03.366499 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:03.395548 1176706 cri.go:89] found id: ""
	I1217 00:59:03.395561 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.395568 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:03.395574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:03.395631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:03.425673 1176706 cri.go:89] found id: ""
	I1217 00:59:03.425687 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.425694 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:03.425699 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:03.425758 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:03.452761 1176706 cri.go:89] found id: ""
	I1217 00:59:03.452775 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.452782 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:03.452790 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:03.452813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:03.470985 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:03.471004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:03.539585 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:03.539606 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:03.539617 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:03.608766 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:03.608787 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.641472 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:03.641487 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.214627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:06.225029 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:06.225095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:06.262898 1176706 cri.go:89] found id: ""
	I1217 00:59:06.262912 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.262919 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:06.262924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:06.262979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:06.295811 1176706 cri.go:89] found id: ""
	I1217 00:59:06.295825 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.295832 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:06.295837 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:06.295900 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:06.325305 1176706 cri.go:89] found id: ""
	I1217 00:59:06.325319 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.325326 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:06.325331 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:06.325388 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:06.350976 1176706 cri.go:89] found id: ""
	I1217 00:59:06.350990 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.350997 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:06.351002 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:06.351061 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:06.381013 1176706 cri.go:89] found id: ""
	I1217 00:59:06.381027 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.381034 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:06.381040 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:06.381156 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:06.407543 1176706 cri.go:89] found id: ""
	I1217 00:59:06.407556 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.407564 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:06.407569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:06.407627 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:06.435419 1176706 cri.go:89] found id: ""
	I1217 00:59:06.435433 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.435440 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:06.435448 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:06.435460 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:06.472071 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:06.472098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.540915 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:06.540936 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:06.558800 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:06.558816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:06.626144 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:06.626156 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:06.626167 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.199032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:09.210273 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:09.210345 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:09.238454 1176706 cri.go:89] found id: ""
	I1217 00:59:09.238468 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.238475 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:09.238481 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:09.238539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:09.283355 1176706 cri.go:89] found id: ""
	I1217 00:59:09.283369 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.283377 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:09.283382 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:09.283452 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:09.322894 1176706 cri.go:89] found id: ""
	I1217 00:59:09.322909 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.322917 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:09.322924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:09.322983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:09.349261 1176706 cri.go:89] found id: ""
	I1217 00:59:09.349275 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.349282 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:09.349290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:09.349348 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:09.375365 1176706 cri.go:89] found id: ""
	I1217 00:59:09.375381 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.375390 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:09.375395 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:09.375458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:09.404751 1176706 cri.go:89] found id: ""
	I1217 00:59:09.404765 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.404773 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:09.404778 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:09.404840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:09.430184 1176706 cri.go:89] found id: ""
	I1217 00:59:09.430198 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.430206 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:09.430214 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:09.430224 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:09.496857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:09.496876 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:09.515406 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:09.515423 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:09.581087 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:09.581098 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:09.581109 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.650268 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:09.650288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.181362 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:12.192867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:12.192928 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:12.219737 1176706 cri.go:89] found id: ""
	I1217 00:59:12.219750 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.219757 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:12.219763 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:12.219821 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:12.245063 1176706 cri.go:89] found id: ""
	I1217 00:59:12.245084 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.245091 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:12.245097 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:12.245165 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:12.272131 1176706 cri.go:89] found id: ""
	I1217 00:59:12.272145 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.272152 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:12.272157 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:12.272216 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:12.303997 1176706 cri.go:89] found id: ""
	I1217 00:59:12.304011 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.304018 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:12.304024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:12.304085 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:12.333611 1176706 cri.go:89] found id: ""
	I1217 00:59:12.333624 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.333632 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:12.333637 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:12.333693 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:12.363773 1176706 cri.go:89] found id: ""
	I1217 00:59:12.363789 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.363797 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:12.363802 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:12.363863 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:12.389846 1176706 cri.go:89] found id: ""
	I1217 00:59:12.389861 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.389868 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:12.389875 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:12.389886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:12.407604 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:12.407621 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:12.473182 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:12.473192 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:12.473203 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:12.543348 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:12.543369 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.577767 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:12.577783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.146065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:15.160131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:15.160197 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:15.190612 1176706 cri.go:89] found id: ""
	I1217 00:59:15.190626 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.190634 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:15.190639 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:15.190699 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:15.218099 1176706 cri.go:89] found id: ""
	I1217 00:59:15.218113 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.218121 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:15.218126 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:15.218184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:15.248835 1176706 cri.go:89] found id: ""
	I1217 00:59:15.248848 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.248856 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:15.248861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:15.248918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:15.285228 1176706 cri.go:89] found id: ""
	I1217 00:59:15.285242 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.285250 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:15.285256 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:15.285342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:15.319666 1176706 cri.go:89] found id: ""
	I1217 00:59:15.319684 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.319692 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:15.319697 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:15.319762 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:15.349950 1176706 cri.go:89] found id: ""
	I1217 00:59:15.349964 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.349971 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:15.349985 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:15.350057 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:15.377523 1176706 cri.go:89] found id: ""
	I1217 00:59:15.377539 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.377546 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:15.377553 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:15.377563 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.444971 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:15.444997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:15.463350 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:15.463367 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:15.527808 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:15.527819 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:15.527829 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:15.596798 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:15.596819 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.130677 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:18.141262 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:18.141323 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:18.169116 1176706 cri.go:89] found id: ""
	I1217 00:59:18.169130 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.169138 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:18.169144 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:18.169213 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:18.196282 1176706 cri.go:89] found id: ""
	I1217 00:59:18.196296 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.196303 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:18.196308 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:18.196374 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:18.221983 1176706 cri.go:89] found id: ""
	I1217 00:59:18.222001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.222008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:18.222014 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:18.222104 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:18.253664 1176706 cri.go:89] found id: ""
	I1217 00:59:18.253678 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.253695 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:18.253701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:18.253759 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:18.285902 1176706 cri.go:89] found id: ""
	I1217 00:59:18.285926 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.285935 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:18.285940 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:18.286012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:18.314726 1176706 cri.go:89] found id: ""
	I1217 00:59:18.314740 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.314747 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:18.314762 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:18.314817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:18.344853 1176706 cri.go:89] found id: ""
	I1217 00:59:18.344867 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.344875 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:18.344882 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:18.344904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:18.414538 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:18.414559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.447095 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:18.447111 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:18.512991 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:18.513011 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:18.533994 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:18.534020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:18.598850 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.100519 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:21.110642 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:21.110704 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:21.135662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.135677 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.135684 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:21.135690 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:21.135749 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:21.165495 1176706 cri.go:89] found id: ""
	I1217 00:59:21.165508 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.165515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:21.165522 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:21.165581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:21.190194 1176706 cri.go:89] found id: ""
	I1217 00:59:21.190216 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.190224 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:21.190229 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:21.190286 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:21.215635 1176706 cri.go:89] found id: ""
	I1217 00:59:21.215658 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.215668 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:21.215674 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:21.215741 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:21.240901 1176706 cri.go:89] found id: ""
	I1217 00:59:21.240915 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.240922 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:21.240928 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:21.240985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:21.282662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.282676 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.282683 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:21.282689 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:21.282747 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:21.318909 1176706 cri.go:89] found id: ""
	I1217 00:59:21.318937 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.318946 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:21.318955 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:21.318981 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:21.389438 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:21.389459 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:21.407933 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:21.407951 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:21.470948 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.470958 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:21.470970 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:21.543202 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:21.543223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:24.074213 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:24.084903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:24.084967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:24.111192 1176706 cri.go:89] found id: ""
	I1217 00:59:24.111207 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.111214 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:24.111221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:24.111280 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:24.137550 1176706 cri.go:89] found id: ""
	I1217 00:59:24.137564 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.137572 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:24.137577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:24.137638 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:24.163576 1176706 cri.go:89] found id: ""
	I1217 00:59:24.163590 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.163598 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:24.163603 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:24.163661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:24.191365 1176706 cri.go:89] found id: ""
	I1217 00:59:24.191379 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.191386 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:24.191391 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:24.191451 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:24.218021 1176706 cri.go:89] found id: ""
	I1217 00:59:24.218036 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.218043 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:24.218048 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:24.218109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:24.243066 1176706 cri.go:89] found id: ""
	I1217 00:59:24.243079 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.243086 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:24.243092 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:24.243150 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:24.273424 1176706 cri.go:89] found id: ""
	I1217 00:59:24.273438 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.273446 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:24.273453 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:24.273468 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:24.352524 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:24.352545 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:24.370425 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:24.370445 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:24.435871 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:24.435881 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:24.435896 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:24.504929 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:24.504949 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.033266 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:27.043460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:27.043521 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:27.068665 1176706 cri.go:89] found id: ""
	I1217 00:59:27.068679 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.068686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:27.068698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:27.068754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:27.094007 1176706 cri.go:89] found id: ""
	I1217 00:59:27.094021 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.094028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:27.094033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:27.094092 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:27.118910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.118923 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.118931 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:27.118936 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:27.118994 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:27.147303 1176706 cri.go:89] found id: ""
	I1217 00:59:27.147317 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.147324 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:27.147330 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:27.147386 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:27.172343 1176706 cri.go:89] found id: ""
	I1217 00:59:27.172357 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.172365 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:27.172370 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:27.172458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:27.197910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.197924 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.197932 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:27.197938 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:27.198001 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:27.227576 1176706 cri.go:89] found id: ""
	I1217 00:59:27.227591 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.227598 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:27.227606 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:27.227618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:27.311005 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:27.311016 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:27.311026 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:27.382732 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:27.382752 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.415820 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:27.415836 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:27.482903 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:27.482926 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.004621 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:30.030664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:30.030745 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:30.081469 1176706 cri.go:89] found id: ""
	I1217 00:59:30.081485 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.081493 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:30.081499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:30.081566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:30.113916 1176706 cri.go:89] found id: ""
	I1217 00:59:30.113931 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.113939 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:30.113946 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:30.114011 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:30.145424 1176706 cri.go:89] found id: ""
	I1217 00:59:30.145439 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.145447 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:30.145453 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:30.145519 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:30.172979 1176706 cri.go:89] found id: ""
	I1217 00:59:30.172993 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.173000 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:30.173006 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:30.173068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:30.203666 1176706 cri.go:89] found id: ""
	I1217 00:59:30.203680 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.203688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:30.203693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:30.203754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:30.230252 1176706 cri.go:89] found id: ""
	I1217 00:59:30.230266 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.230274 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:30.230280 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:30.230346 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:30.263265 1176706 cri.go:89] found id: ""
	I1217 00:59:30.263288 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.263297 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:30.263305 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:30.263317 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.285817 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:30.285833 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:30.357587 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:30.357597 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:30.357609 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:30.426496 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:30.426518 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:30.455371 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:30.455387 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:33.025588 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:33.037063 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:33.037133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:33.066495 1176706 cri.go:89] found id: ""
	I1217 00:59:33.066510 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.066518 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:33.066531 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:33.066593 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:33.094203 1176706 cri.go:89] found id: ""
	I1217 00:59:33.094218 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.094225 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:33.094230 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:33.094289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:33.121048 1176706 cri.go:89] found id: ""
	I1217 00:59:33.121062 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.121070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:33.121076 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:33.121137 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:33.148530 1176706 cri.go:89] found id: ""
	I1217 00:59:33.148559 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.148568 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:33.148574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:33.148647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:33.175802 1176706 cri.go:89] found id: ""
	I1217 00:59:33.175816 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.175823 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:33.175829 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:33.175892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:33.206535 1176706 cri.go:89] found id: ""
	I1217 00:59:33.206548 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.206556 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:33.206562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:33.206623 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:33.236039 1176706 cri.go:89] found id: ""
	I1217 00:59:33.236052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.236060 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:33.236068 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:33.236078 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:33.255180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:33.255197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:33.339098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:33.339108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:33.339121 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:33.412971 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:33.412997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:33.441676 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:33.441694 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.008647 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:36.020237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:36.020301 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:36.052600 1176706 cri.go:89] found id: ""
	I1217 00:59:36.052616 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.052623 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:36.052629 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:36.052692 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:36.081744 1176706 cri.go:89] found id: ""
	I1217 00:59:36.081759 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.081768 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:36.081773 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:36.081841 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:36.109987 1176706 cri.go:89] found id: ""
	I1217 00:59:36.110001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.110008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:36.110013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:36.110077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:36.134954 1176706 cri.go:89] found id: ""
	I1217 00:59:36.134967 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.134975 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:36.134980 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:36.135037 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:36.159862 1176706 cri.go:89] found id: ""
	I1217 00:59:36.159876 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.159884 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:36.159889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:36.159947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:36.187809 1176706 cri.go:89] found id: ""
	I1217 00:59:36.187822 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.187829 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:36.187835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:36.187904 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:36.214242 1176706 cri.go:89] found id: ""
	I1217 00:59:36.214257 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.214264 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:36.214272 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:36.214283 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.286225 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:36.286244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:36.305628 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:36.305646 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:36.371158 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:36.371170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:36.371181 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:36.439045 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:36.439065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:38.969106 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:38.979363 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:38.979424 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:39.007795 1176706 cri.go:89] found id: ""
	I1217 00:59:39.007810 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.007818 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:39.007824 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:39.007888 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:39.034152 1176706 cri.go:89] found id: ""
	I1217 00:59:39.034166 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.034173 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:39.034179 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:39.034238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:39.059914 1176706 cri.go:89] found id: ""
	I1217 00:59:39.059928 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.059935 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:39.059941 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:39.060002 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:39.085320 1176706 cri.go:89] found id: ""
	I1217 00:59:39.085334 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.085341 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:39.085349 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:39.085405 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:39.110285 1176706 cri.go:89] found id: ""
	I1217 00:59:39.110298 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.110306 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:39.110311 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:39.110372 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:39.135035 1176706 cri.go:89] found id: ""
	I1217 00:59:39.135058 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.135066 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:39.135072 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:39.135139 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:39.159817 1176706 cri.go:89] found id: ""
	I1217 00:59:39.159830 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.159848 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:39.159857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:39.159872 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:39.177791 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:39.177809 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:39.249533 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:39.249543 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:39.249552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:39.325557 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:39.325577 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:39.360066 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:39.360085 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:41.928404 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:41.938632 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:41.938696 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:41.964031 1176706 cri.go:89] found id: ""
	I1217 00:59:41.964052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.964059 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:41.964064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:41.964122 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:41.993062 1176706 cri.go:89] found id: ""
	I1217 00:59:41.993076 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.993084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:41.993089 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:41.993160 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:42.033652 1176706 cri.go:89] found id: ""
	I1217 00:59:42.033667 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.033676 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:42.033681 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:42.033746 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:42.060629 1176706 cri.go:89] found id: ""
	I1217 00:59:42.060645 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.060653 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:42.060659 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:42.060722 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:42.092817 1176706 cri.go:89] found id: ""
	I1217 00:59:42.092845 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.092853 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:42.092868 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:42.092941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:42.136486 1176706 cri.go:89] found id: ""
	I1217 00:59:42.136506 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.136515 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:42.136521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:42.136592 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:42.171937 1176706 cri.go:89] found id: ""
	I1217 00:59:42.171952 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.171959 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:42.171967 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:42.171979 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:42.262695 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:42.262707 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:42.262718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:42.339199 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:42.339220 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:42.372997 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:42.373025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:42.446036 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:42.446055 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:44.965013 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:44.976094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:44.976161 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:45.015164 1176706 cri.go:89] found id: ""
	I1217 00:59:45.015181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.015189 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:45.015195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:45.015272 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:45.071613 1176706 cri.go:89] found id: ""
	I1217 00:59:45.071635 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.071643 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:45.071649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:45.071715 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:45.119793 1176706 cri.go:89] found id: ""
	I1217 00:59:45.119818 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.119826 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:45.119839 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:45.119914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:45.151783 1176706 cri.go:89] found id: ""
	I1217 00:59:45.151800 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.151808 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:45.151814 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:45.151892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:45.215691 1176706 cri.go:89] found id: ""
	I1217 00:59:45.215708 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.215717 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:45.215723 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:45.215788 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:45.307588 1176706 cri.go:89] found id: ""
	I1217 00:59:45.307603 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.307612 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:45.307617 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:45.307686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:45.338241 1176706 cri.go:89] found id: ""
	I1217 00:59:45.338255 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.338262 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:45.338270 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:45.338281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:45.369988 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:45.370005 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:45.441693 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:45.441715 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:45.461548 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:45.461567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:45.548353 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:45.548363 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:45.548374 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.120029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:48.130460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:48.130527 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:48.158049 1176706 cri.go:89] found id: ""
	I1217 00:59:48.158063 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.158070 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:48.158075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:48.158133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:48.183768 1176706 cri.go:89] found id: ""
	I1217 00:59:48.183782 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.183790 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:48.183795 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:48.183853 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:48.209858 1176706 cri.go:89] found id: ""
	I1217 00:59:48.209883 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.209891 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:48.209897 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:48.209969 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:48.239433 1176706 cri.go:89] found id: ""
	I1217 00:59:48.239447 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.239464 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:48.239470 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:48.239546 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:48.283289 1176706 cri.go:89] found id: ""
	I1217 00:59:48.283312 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.283320 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:48.283325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:48.283401 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:48.314402 1176706 cri.go:89] found id: ""
	I1217 00:59:48.314429 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.314437 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:48.314443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:48.314511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:48.343692 1176706 cri.go:89] found id: ""
	I1217 00:59:48.343706 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.343727 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:48.343735 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:48.343745 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:48.362542 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:48.362560 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:48.427994 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:48.428004 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:48.428016 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.499539 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:48.499559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:48.531009 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:48.531025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.098220 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:51.109265 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:51.109331 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:51.136197 1176706 cri.go:89] found id: ""
	I1217 00:59:51.136213 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.136221 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:51.136227 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:51.136287 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:51.163078 1176706 cri.go:89] found id: ""
	I1217 00:59:51.163092 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.163100 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:51.163105 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:51.163172 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:51.191839 1176706 cri.go:89] found id: ""
	I1217 00:59:51.191853 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.191861 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:51.191866 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:51.191949 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:51.218098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.218116 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.218124 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:51.218130 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:51.218211 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:51.243098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.243112 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.243120 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:51.243125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:51.243191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:51.271566 1176706 cri.go:89] found id: ""
	I1217 00:59:51.271579 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.271586 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:51.271591 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:51.271647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:51.305158 1176706 cri.go:89] found id: ""
	I1217 00:59:51.305181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.305187 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:51.305196 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:51.305207 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.376352 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:51.376373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:51.394410 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:51.394427 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:51.459231 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:51.459240 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:51.459251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:51.528231 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:51.528252 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:54.058312 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:54.069031 1176706 kubeadm.go:602] duration metric: took 4m2.785263609s to restartPrimaryControlPlane
	W1217 00:59:54.069095 1176706 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:59:54.069181 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:59:54.486154 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:59:54.499356 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:59:54.507725 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:59:54.507779 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:59:54.515997 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:59:54.516007 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 00:59:54.516064 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:59:54.524157 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:59:54.524213 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:59:54.532265 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:59:54.540638 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:59:54.540707 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:59:54.548269 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.556326 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:59:54.556388 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.564545 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:59:54.572682 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:59:54.572738 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:59:54.580611 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:59:54.700281 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:59:54.700747 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:59:54.763643 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:03:56.152758 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:03:56.152795 1176706 kubeadm.go:319] 
	I1217 01:03:56.152869 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:03:56.156728 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.156797 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.156958 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.157014 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.157073 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.157118 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.157197 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.157253 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.157300 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.157352 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.157400 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.157453 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.157508 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.157553 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.157624 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.157727 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.157824 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.157884 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.160971 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.161055 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.161118 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.161193 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.161252 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.161327 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.161379 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.161441 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.161501 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.161574 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.161645 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.161681 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.161741 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:56.161790 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:56.161845 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:56.161896 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:56.161957 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:56.162010 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:56.162092 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:56.162157 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:56.165021 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:56.165147 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:56.165231 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:56.165300 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:56.165418 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:56.165512 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:56.165614 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:56.165696 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:56.165733 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:56.165861 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:56.165963 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:03:56.166026 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240228s
	I1217 01:03:56.166028 1176706 kubeadm.go:319] 
	I1217 01:03:56.166083 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:03:56.166114 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:03:56.166215 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:03:56.166218 1176706 kubeadm.go:319] 
	I1217 01:03:56.166320 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:03:56.166351 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:03:56.166380 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1217 01:03:56.166487 1176706 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240228s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:03:56.166580 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 01:03:56.166903 1176706 kubeadm.go:319] 
	I1217 01:03:56.586040 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:03:56.599481 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:03:56.599536 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:03:56.607687 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:03:56.607697 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 01:03:56.607750 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:03:56.615588 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:03:56.615644 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:03:56.623820 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:03:56.631817 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:03:56.631875 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:03:56.639771 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.647723 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:03:56.647784 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.655274 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:03:56.662953 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:03:56.663009 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:03:56.671031 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:03:56.709331 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.709382 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.784528 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.784593 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.784627 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.784671 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.784718 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.784764 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.784811 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.784857 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.784907 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.784950 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.784997 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.785046 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.852730 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.852846 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.852941 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.864882 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.870169 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.870260 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.870331 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.870414 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.870480 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.870560 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.870623 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.870698 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.870772 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.870857 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.870939 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.870985 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.871053 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:57.081118 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:57.308024 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:57.795688 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:58.747783 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:59.056308 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:59.056908 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:59.061460 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:59.064667 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:59.064766 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:59.064843 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:59.064909 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:59.079437 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:59.079539 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:59.087425 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:59.087990 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:59.088228 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:59.232706 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:59.232823 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:07:59.232882 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288911s
	I1217 01:07:59.232905 1176706 kubeadm.go:319] 
	I1217 01:07:59.232961 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:07:59.232994 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:07:59.233119 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:07:59.233124 1176706 kubeadm.go:319] 
	I1217 01:07:59.233227 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:07:59.233261 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:07:59.233291 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:07:59.233294 1176706 kubeadm.go:319] 
	I1217 01:07:59.237945 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:07:59.238359 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:07:59.238466 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:07:59.238699 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:07:59.238704 1176706 kubeadm.go:319] 
	I1217 01:07:59.238771 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:07:59.238833 1176706 kubeadm.go:403] duration metric: took 12m7.995613678s to StartCluster
	I1217 01:07:59.238862 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:07:59.238924 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:07:59.265092 1176706 cri.go:89] found id: ""
	I1217 01:07:59.265110 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.265118 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:07:59.265124 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:07:59.265190 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:07:59.289869 1176706 cri.go:89] found id: ""
	I1217 01:07:59.289884 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.289891 1176706 logs.go:284] No container was found matching "etcd"
	I1217 01:07:59.289896 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:07:59.289954 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:07:59.315177 1176706 cri.go:89] found id: ""
	I1217 01:07:59.315192 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.315200 1176706 logs.go:284] No container was found matching "coredns"
	I1217 01:07:59.315206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:07:59.315267 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:07:59.343402 1176706 cri.go:89] found id: ""
	I1217 01:07:59.343422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.343429 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:07:59.343435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:07:59.343492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:07:59.369351 1176706 cri.go:89] found id: ""
	I1217 01:07:59.369367 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.369375 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:07:59.369381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:07:59.369446 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:07:59.395407 1176706 cri.go:89] found id: ""
	I1217 01:07:59.395422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.395430 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:07:59.395436 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:07:59.395497 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:07:59.425527 1176706 cri.go:89] found id: ""
	I1217 01:07:59.425542 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.425549 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 01:07:59.425557 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:07:59.425567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:07:59.496396 1176706 logs.go:123] Gathering logs for container status ...
	I1217 01:07:59.496422 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:07:59.529365 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 01:07:59.529381 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:07:59.607059 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 01:07:59.607079 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:07:59.625460 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:07:59.625476 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:07:59.694111 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 01:07:59.694128 1176706 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:07:59.694160 1176706 out.go:285] * 
	W1217 01:07:59.696578 1176706 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.696718 1176706 out.go:285] * 
	W1217 01:07:59.699147 1176706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:07:59.705064 1176706 out.go:203] 
	W1217 01:07:59.708024 1176706 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.708074 1176706 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:07:59.708093 1176706 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:07:59.711386 1176706 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682372585Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68256814Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68261275Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682675452Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711610422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711759996Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711798871Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739279084Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739445118Z" level=info msg="Image localhost/kicbase/echo-server:functional-389537 not found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739495176Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-389537 found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732782966Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732960388Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733031024Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733098123Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765602567Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765759674Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765805293Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.806741123Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=54276271-8e2f-42ec-a439-ea95344609a5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:10:09.311123   23592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:09.311713   23592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:09.313216   23592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:09.313621   23592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:10:09.315367   23592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:10:09 up  6:52,  0 user,  load average: 0.25, 0.32, 0.46
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:10:06 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:07 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2290.
	Dec 17 01:10:07 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:07 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:07 functional-389537 kubelet[23478]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:07 functional-389537 kubelet[23478]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:07 functional-389537 kubelet[23478]: E1217 01:10:07.308630   23478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:07 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:07 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:07 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2291.
	Dec 17 01:10:07 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:08 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:08 functional-389537 kubelet[23483]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:08 functional-389537 kubelet[23483]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:08 functional-389537 kubelet[23483]: E1217 01:10:08.066419   23483 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:08 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:08 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:10:08 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2292.
	Dec 17 01:10:08 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:08 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:10:08 functional-389537 kubelet[23504]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:08 functional-389537 kubelet[23504]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:10:08 functional-389537 kubelet[23504]: E1217 01:10:08.814459   23504 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:10:08 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:10:08 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (369.566717ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.43s)
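The failure above comes down to the kubelet never becoming healthy on a cgroup v1 host: kubeadm's preflight warning, the kubelet's own "failed to validate kubelet configuration ... cgroup v1" error, and minikube's suggestion to pass an explicit cgroup driver all point at the same root cause. A minimal triage sketch, assuming shell access to the same profile the run used (the profile name functional-389537 and the --extra-config suggestion are taken from the log above; the stat path and journalctl unit are standard, and nothing here is specific to this test suite):

	# Confirm which cgroup hierarchy the node exposes: cgroup2fs means v2, tmpfs means v1.
	minikube ssh -p functional-389537 -- stat -fc %T /sys/fs/cgroup

	# Read the kubelet restart loop directly, as the kubeadm output recommends.
	minikube ssh -p functional-389537 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# Retry with the cgroup driver pinned to systemd, per the suggestion printed in the log.
	minikube start -p functional-389537 --extra-config=kubelet.cgroup-driver=systemd

Whether the last step is enough depends on the host: the kubelet error above is about the FailCgroupV1 configuration option rather than the driver, so a cgroup v1 node may still need that option set to false (or the host moved to cgroup v2), as the kubeadm warning states, before the control plane can come up.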

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:08:29.365808 1136597 retry.go:31] will retry after 5.34472931s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:08:44.711272 1136597 retry.go:31] will retry after 6.41115367s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:09:01.123819 1136597 retry.go:31] will retry after 9.260729639s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1217 01:09:20.386014 1136597 retry.go:31] will retry after 8.499401252s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(the warning above occurred 19 times in total)
I1217 01:09:38.887599 1136597 retry.go:31] will retry after 18.675439583s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(the warning above occurred 9 times in total)
E1217 01:09:48.432971 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
(the warning above occurred 117 times in total)
E1217 01:11:45.354335 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: (the identical "connection refused" warning above was logged 33 more times while polling for the pod)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (316.073265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
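The wait loop above polls the kube-system namespace for pods carrying the integration-test=storage-provisioner label until the 4m0s deadline expires. An equivalent manual check (a sketch, assuming the same functional-389537 kubectl context used elsewhere in this run) is:

	kubectl --context functional-389537 -n kube-system get pods -l integration-test=storage-provisioner

With the apiserver on 192.168.49.2:8441 refusing connections, this command fails the same way the test helper does.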
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (309.450332ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-389537 ssh findmnt -T /mount1                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh            │ functional-389537 ssh findmnt -T /mount2                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ ssh            │ functional-389537 ssh findmnt -T /mount3                                                                                                      │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ mount          │ -p functional-389537 --kill=true                                                                                                              │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ addons         │ functional-389537 addons list                                                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ addons         │ functional-389537 addons list -o json                                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ service        │ functional-389537 service list                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service        │ functional-389537 service list -o json                                                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service        │ functional-389537 service --namespace=default --https --url hello-node                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service        │ functional-389537 service hello-node --url --format={{.IP}}                                                                                   │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ service        │ functional-389537 service hello-node --url                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start          │ -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start          │ -p functional-389537 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ start          │ -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-389537 --alsologtostderr -v=1                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ image          │ functional-389537 image ls --format short --alsologtostderr                                                                                   │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ image          │ functional-389537 image ls --format yaml --alsologtostderr                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ ssh            │ functional-389537 ssh pgrep buildkitd                                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │                     │
	│ image          │ functional-389537 image build -t localhost/my-image:functional-389537 testdata/build --alsologtostderr                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ image          │ functional-389537 image ls                                                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ image          │ functional-389537 image ls --format json --alsologtostderr                                                                                    │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ image          │ functional-389537 image ls --format table --alsologtostderr                                                                                   │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ update-context │ functional-389537 update-context --alsologtostderr -v=2                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ update-context │ functional-389537 update-context --alsologtostderr -v=2                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	│ update-context │ functional-389537 update-context --alsologtostderr -v=2                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:10 UTC │ 17 Dec 25 01:10 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:10:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:10:18.049509 1195605 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:10:18.049733 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.049764 1195605 out.go:374] Setting ErrFile to fd 2...
	I1217 01:10:18.049784 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.050297 1195605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:10:18.050762 1195605 out.go:368] Setting JSON to false
	I1217 01:10:18.051720 1195605 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24768,"bootTime":1765909050,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:10:18.051822 1195605 start.go:143] virtualization:  
	I1217 01:10:18.056956 1195605 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:10:18.059947 1195605 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:10:18.060044 1195605 notify.go:221] Checking for updates...
	I1217 01:10:18.065731 1195605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:10:18.068771 1195605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:10:18.071703 1195605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:10:18.074678 1195605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:10:18.077595 1195605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:10:18.081023 1195605 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:10:18.081621 1195605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:10:18.120581 1195605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:10:18.120797 1195605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:18.188706 1195605 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:18.178636988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:18.188812 1195605 docker.go:319] overlay module found
	I1217 01:10:18.191969 1195605 out.go:179] * Using the docker driver based on the existing profile
	I1217 01:10:18.194813 1195605 start.go:309] selected driver: docker
	I1217 01:10:18.194851 1195605 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:18.194963 1195605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:10:18.198653 1195605 out.go:203] 
	W1217 01:10:18.201645 1195605 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 01:10:18.204565 1195605 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682372585Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68256814Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68261275Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682675452Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711610422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711759996Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711798871Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739279084Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739445118Z" level=info msg="Image localhost/kicbase/echo-server:functional-389537 not found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739495176Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-389537 found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732782966Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732960388Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733031024Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733098123Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765602567Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765759674Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765805293Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.806741123Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=54276271-8e2f-42ec-a439-ea95344609a5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:12:27.065228   25638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:12:27.066061   25638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:12:27.067711   25638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:12:27.068046   25638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:12:27.069588   25638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:12:27 up  6:54,  0 user,  load average: 0.61, 0.43, 0.48
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:12:24 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:12:25 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2474.
	Dec 17 01:12:25 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:25 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:25 functional-389537 kubelet[25513]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:25 functional-389537 kubelet[25513]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:25 functional-389537 kubelet[25513]: E1217 01:12:25.299204   25513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:12:25 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:12:25 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:12:25 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2475.
	Dec 17 01:12:25 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:26 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:26 functional-389537 kubelet[25525]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:26 functional-389537 kubelet[25525]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:26 functional-389537 kubelet[25525]: E1217 01:12:26.068052   25525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:12:26 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:12:26 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:12:26 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2476.
	Dec 17 01:12:26 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:26 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:12:26 functional-389537 kubelet[25567]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:26 functional-389537 kubelet[25567]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:12:26 functional-389537 kubelet[25567]: E1217 01:12:26.815744   25567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:12:26 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:12:26 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (295.685113ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.64s)
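The kubelet section of the log above points at the root cause: the kubelet refuses to start because the node is on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so it crash-loops (restart counter 2474-2476) and the apiserver on port 8441 never comes back. A minimal way to confirm the cgroup mode, using only standard tools (the container name is taken from the docker inspect output above):

	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the same check inside the minikube node container
	docker exec functional-389537 stat -fc %T /sys/fs/cgroup/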

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-389537 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-389537 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (68.313715ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-389537 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
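The "slice index out of range" comes from the template itself: with the apiserver refusing connections, kubectl falls back to an empty List ({"apiVersion":"v1","items":[]}), so (index .items 0) has nothing to index. A guarded variant of the same template (illustrative only; kubectl would still exit non-zero because of the refused connection, but the template no longer errors on an empty list) would be:

	kubectl --context functional-389537 get nodes -o go-template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'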
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
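The six identical failures above share one root cause: the go-template indexes element 0 of .items, but the apiserver at 192.168.49.2:8441 is refusing connections, so kubectl ends up templating an empty list ({"items":[]}) and the index panics. A defensive variant of the same query (a sketch only, not part of the test suite; it avoids the template error but would still report no nodes while the apiserver is down):

    kubectl --context functional-389537 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}no nodes returned{{end}}'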
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-389537
helpers_test.go:244: (dbg) docker inspect functional-389537:

-- stdout --
	[
	    {
	        "Id": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	        "Created": "2025-12-17T00:41:06.097242016Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1165271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:41:06.169334494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/hosts",
	        "LogPath": "/var/lib/docker/containers/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28/74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28-json.log",
	        "Name": "/functional-389537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-389537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-389537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74a69b8365e264083d107ba384141f35edc58177d29436c09ac9c728f770ef28",
	                "LowerDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3853b8cfe72cff08f35c166ecef823cd6974e229c872a494c4e87ce6fd7101a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-389537",
	                "Source": "/var/lib/docker/volumes/functional-389537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-389537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-389537",
	                "name.minikube.sigs.k8s.io": "functional-389537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84f7cd01e57631208054fc30855b5ce3565646c2242e838d7b1dcf94e8598664",
	            "SandboxKey": "/var/run/docker/netns/84f7cd01e576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-389537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:3a:33:49:33:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14973b3b0f3eb5c0249ccbe411606f26da2b0c88fd109a1ba1e3feb37cc7f0d3",
	                    "EndpointID": "f1336a895143cac8f8d060fe58f09f12b199bc0886e1d40a9a5c27060d01a6ff",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-389537",
	                        "74a69b8365e2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
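From Docker's point of view the node container is healthy: the functional-389537 network assigns it 192.168.49.2, and 8441/tcp (the apiserver port) is published to 127.0.0.1:33911 on the host. As a quick cross-check, the same index-template style minikube uses elsewhere in these logs for 22/tcp can be pointed at the apiserver port (hypothetical one-liner, not emitted by the harness):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-389537

Given the data above this would print 33911, so the refused connections point at the apiserver process inside the container rather than at the Docker port mapping.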
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389537 -n functional-389537: exit status 2 (307.586455ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
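The "Running" above is only the Docker host state; the non-zero exit from minikube status indicates the cluster is not fully healthy, which is consistent with the refused connections on 8441. Two illustrative follow-up probes (assumptions: these are not part of the post-mortem harness) that would confirm whether kube-apiserver is actually up inside the node:

    minikube -p functional-389537 ssh -- sudo crictl ps -a --name kube-apiserver
    minikube -p functional-389537 ssh -- "sudo ss -tlnp | grep 8441"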
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-389537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                  │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ config  │ functional-389537 config unset cpus                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ license │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ config  │ functional-389537 config get cpus                                                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ config  │ functional-389537 config set cpus 2                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ config  │ functional-389537 config get cpus                                                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ config  │ functional-389537 config unset cpus                                                                                                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ config  │ functional-389537 config get cpus                                                                                                                         │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ tunnel  │ functional-389537 tunnel --alsologtostderr                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ tunnel  │ functional-389537 tunnel --alsologtostderr                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh sudo systemctl is-active docker                                                                                                     │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ ssh     │ functional-389537 ssh sudo systemctl is-active containerd                                                                                                 │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ tunnel  │ functional-389537 tunnel --alsologtostderr                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │                     │
	│ image   │ functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image ls                                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image ls                                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image ls                                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image save kicbase/echo-server:functional-389537 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image rm kicbase/echo-server:functional-389537 --alsologtostderr                                                                        │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image ls                                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image ls                                                                                                                                │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	│ image   │ functional-389537 image save --daemon kicbase/echo-server:functional-389537 --alsologtostderr                                                             │ functional-389537 │ jenkins │ v1.37.0 │ 17 Dec 25 01:08 UTC │ 17 Dec 25 01:08 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:55:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:55:46.994785 1176706 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:55:46.994905 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.994909 1176706 out.go:374] Setting ErrFile to fd 2...
	I1217 00:55:46.994912 1176706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:55:46.995145 1176706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:55:46.995485 1176706 out.go:368] Setting JSON to false
	I1217 00:55:46.996300 1176706 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23897,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:55:46.996353 1176706 start.go:143] virtualization:  
	I1217 00:55:46.999868 1176706 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:55:47.003126 1176706 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:55:47.003469 1176706 notify.go:221] Checking for updates...
	I1217 00:55:47.009985 1176706 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:55:47.012797 1176706 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:55:47.015597 1176706 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:55:47.018366 1176706 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:55:47.021294 1176706 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:55:47.024608 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:47.024710 1176706 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:55:47.058976 1176706 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:55:47.059096 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.117622 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.107831529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.117708 1176706 docker.go:319] overlay module found
	I1217 00:55:47.120741 1176706 out.go:179] * Using the docker driver based on existing profile
	I1217 00:55:47.123563 1176706 start.go:309] selected driver: docker
	I1217 00:55:47.123570 1176706 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.123673 1176706 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:55:47.123773 1176706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:55:47.174997 1176706 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-17 00:55:47.166206706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:55:47.175382 1176706 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:55:47.175411 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:47.175464 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:47.175503 1176706 start.go:353] cluster config:
	{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:47.182544 1176706 out.go:179] * Starting "functional-389537" primary control-plane node in "functional-389537" cluster
	I1217 00:55:47.185443 1176706 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:55:47.188263 1176706 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:55:47.191087 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:47.191140 1176706 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:55:47.191147 1176706 cache.go:65] Caching tarball of preloaded images
	I1217 00:55:47.191162 1176706 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:55:47.191229 1176706 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 00:55:47.191238 1176706 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:55:47.191343 1176706 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/config.json ...
	I1217 00:55:47.210444 1176706 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:55:47.210456 1176706 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:55:47.210476 1176706 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:55:47.210509 1176706 start.go:360] acquireMachinesLock for functional-389537: {Name:mk17ed50665c6c336540943e42c985fe48aca5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:55:47.210571 1176706 start.go:364] duration metric: took 45.496µs to acquireMachinesLock for "functional-389537"
	I1217 00:55:47.210589 1176706 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:55:47.210598 1176706 fix.go:54] fixHost starting: 
	I1217 00:55:47.210865 1176706 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
	I1217 00:55:47.227344 1176706 fix.go:112] recreateIfNeeded on functional-389537: state=Running err=<nil>
	W1217 00:55:47.227372 1176706 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:55:47.230529 1176706 out.go:252] * Updating the running docker "functional-389537" container ...
	I1217 00:55:47.230551 1176706 machine.go:94] provisionDockerMachine start ...
	I1217 00:55:47.230646 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.247199 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.247509 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.247515 1176706 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:55:47.376058 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.376078 1176706 ubuntu.go:182] provisioning hostname "functional-389537"
	I1217 00:55:47.376140 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.394017 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.394338 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.394346 1176706 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-389537 && echo "functional-389537" | sudo tee /etc/hostname
	I1217 00:55:47.541042 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389537
	
	I1217 00:55:47.541113 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.567770 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:47.568067 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:47.568081 1176706 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-389537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389537/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-389537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:55:47.696783 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:55:47.696798 1176706 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 00:55:47.696826 1176706 ubuntu.go:190] setting up certificates
	I1217 00:55:47.696844 1176706 provision.go:84] configureAuth start
	I1217 00:55:47.696911 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:47.715433 1176706 provision.go:143] copyHostCerts
	I1217 00:55:47.715503 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 00:55:47.715510 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 00:55:47.715589 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 00:55:47.715698 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 00:55:47.715703 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 00:55:47.715729 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 00:55:47.715793 1176706 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 00:55:47.715796 1176706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 00:55:47.715819 1176706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 00:55:47.715916 1176706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.functional-389537 san=[127.0.0.1 192.168.49.2 functional-389537 localhost minikube]
	I1217 00:55:47.936144 1176706 provision.go:177] copyRemoteCerts
	I1217 00:55:47.936198 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:55:47.936245 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:47.956022 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.053167 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 00:55:48.072266 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:55:48.091659 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:55:48.111240 1176706 provision.go:87] duration metric: took 414.372164ms to configureAuth
	I1217 00:55:48.111259 1176706 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:55:48.111463 1176706 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:55:48.111573 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.130165 1176706 main.go:143] libmachine: Using SSH client type: native
	I1217 00:55:48.130471 1176706 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33908 <nil> <nil>}
	I1217 00:55:48.130482 1176706 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:55:48.471522 1176706 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:55:48.471533 1176706 machine.go:97] duration metric: took 1.240975938s to provisionDockerMachine
	I1217 00:55:48.471544 1176706 start.go:293] postStartSetup for "functional-389537" (driver="docker")
	I1217 00:55:48.471555 1176706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:55:48.471613 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:55:48.471661 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.490121 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.584735 1176706 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:55:48.588097 1176706 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:55:48.588115 1176706 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:55:48.588125 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 00:55:48.588181 1176706 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 00:55:48.588263 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 00:55:48.588334 1176706 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts -> hosts in /etc/test/nested/copy/1136597
	I1217 00:55:48.588376 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1136597
	I1217 00:55:48.596032 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:48.613682 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts --> /etc/test/nested/copy/1136597/hosts (40 bytes)
	I1217 00:55:48.631217 1176706 start.go:296] duration metric: took 159.660022ms for postStartSetup
	I1217 00:55:48.631287 1176706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:55:48.631323 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.648559 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.741603 1176706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:55:48.746366 1176706 fix.go:56] duration metric: took 1.535755013s for fixHost
	I1217 00:55:48.746384 1176706 start.go:83] releasing machines lock for "functional-389537", held for 1.535804694s
	I1217 00:55:48.746455 1176706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389537
	I1217 00:55:48.763224 1176706 ssh_runner.go:195] Run: cat /version.json
	I1217 00:55:48.763430 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.763750 1176706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:55:48.763808 1176706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
	I1217 00:55:48.786426 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.786940 1176706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
	I1217 00:55:48.880624 1176706 ssh_runner.go:195] Run: systemctl --version
	I1217 00:55:48.974663 1176706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:55:49.027409 1176706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:55:49.032432 1176706 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:55:49.032491 1176706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:55:49.041183 1176706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:55:49.041196 1176706 start.go:496] detecting cgroup driver to use...
	I1217 00:55:49.041228 1176706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:55:49.041278 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:55:49.058264 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:55:49.077295 1176706 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:55:49.077360 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:55:49.093971 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:55:49.107900 1176706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:55:49.227935 1176706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:55:49.348723 1176706 docker.go:234] disabling docker service ...
	I1217 00:55:49.348791 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:55:49.364370 1176706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:55:49.377769 1176706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:55:49.508111 1176706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:55:49.633558 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:55:49.646587 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:55:49.660861 1176706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:55:49.660916 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.670006 1176706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:55:49.670064 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.678812 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.687975 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.697006 1176706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:55:49.705500 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.714719 1176706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.723320 1176706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:55:49.732206 1176706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:55:49.740020 1176706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:55:49.747555 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:49.895105 1176706 ssh_runner.go:195] Run: sudo systemctl restart crio
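	The sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment before CRI-O is restarted (a reconstruction from the edits shown in this log; the TOML section headers are assumed from a stock cri-o config and are not captured here):
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]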
	I1217 00:55:50.085156 1176706 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:55:50.085220 1176706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:55:50.089378 1176706 start.go:564] Will wait 60s for crictl version
	I1217 00:55:50.089440 1176706 ssh_runner.go:195] Run: which crictl
	I1217 00:55:50.093400 1176706 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:55:50.123005 1176706 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
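The same probe can be reproduced by hand against the socket configured above; a sketch (the explicit --runtime-endpoint flag is redundant once /etc/crictl.yaml is in place):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # expected, per the output above: RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1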
	I1217 00:55:50.123090 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.155928 1176706 ssh_runner.go:195] Run: crio --version
	I1217 00:55:50.190668 1176706 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 00:55:50.193712 1176706 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:55:50.210245 1176706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 00:55:50.217339 1176706 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:55:50.220306 1176706 kubeadm.go:884] updating cluster {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:55:50.220479 1176706 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:55:50.220549 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.261117 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.261129 1176706 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:55:50.261188 1176706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:55:50.288200 1176706 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:55:50.288211 1176706 cache_images.go:86] Images are preloaded, skipping loading
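Both `crictl images` passes above come back fully populated, so the preload tarball does not need to be extracted again. A rough, hypothetical way to inspect what is already on the node:

    sudo crictl images                                   # human-readable list of preloaded images
    sudo crictl images --output json | grep -c repoTags  # rough count of image entries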
	I1217 00:55:50.288217 1176706 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:55:50.288323 1176706 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-389537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:55:50.288468 1176706 ssh_runner.go:195] Run: crio config
	I1217 00:55:50.348160 1176706 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:55:50.348190 1176706 cni.go:84] Creating CNI manager for ""
	I1217 00:55:50.348199 1176706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:55:50.348212 1176706 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:55:50.348234 1176706 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389537 NodeName:functional-389537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:55:50.348361 1176706 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-389537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
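Before this rendered config is handed to kubeadm it can be sanity-checked offline. A hypothetical check with the bundled kubeadm binary (the `config validate` subcommand is an assumption of this note, not something the test run itself invokes):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new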
	I1217 00:55:50.348453 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:55:50.356478 1176706 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:55:50.356555 1176706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:55:50.364296 1176706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1217 00:55:50.378459 1176706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:55:50.391769 1176706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1217 00:55:50.404843 1176706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:55:50.408803 1176706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:55:50.530281 1176706 ssh_runner.go:195] Run: sudo systemctl start kubelet
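The 10-kubeadm.conf drop-in and kubelet.service unit copied above, followed by the daemon-reload, are what make the kubelet pick up the ExecStart override shown earlier. A hypothetical way to confirm the override is active on the node:

    sudo systemctl cat kubelet | grep -A1 '^ExecStart='
    # the first ExecStart= is empty (clearing any packaged default); the second carries the
    # /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet command line from the drop-in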
	I1217 00:55:50.553453 1176706 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537 for IP: 192.168.49.2
	I1217 00:55:50.553463 1176706 certs.go:195] generating shared ca certs ...
	I1217 00:55:50.553477 1176706 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:55:50.553609 1176706 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 00:55:50.553660 1176706 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 00:55:50.553666 1176706 certs.go:257] generating profile certs ...
	I1217 00:55:50.553779 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.key
	I1217 00:55:50.553831 1176706 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key.05abf8de
	I1217 00:55:50.553877 1176706 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key
	I1217 00:55:50.553979 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 00:55:50.554006 1176706 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 00:55:50.554013 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 00:55:50.554039 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 00:55:50.554060 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:55:50.554085 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 00:55:50.554129 1176706 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 00:55:50.555361 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:55:50.582492 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:55:50.603683 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:55:50.621384 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:55:50.639056 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:55:50.656396 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:55:50.673796 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:55:50.690805 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:55:50.708128 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 00:55:50.726044 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:55:50.743273 1176706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 00:55:50.763262 1176706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:55:50.777113 1176706 ssh_runner.go:195] Run: openssl version
	I1217 00:55:50.783340 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.791319 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 00:55:50.799039 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802914 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.802970 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 00:55:50.844145 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:55:50.851746 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.859382 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:55:50.866837 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870628 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.870686 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:55:50.912088 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:55:50.919506 1176706 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.926804 1176706 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 00:55:50.934239 1176706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938447 1176706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.938514 1176706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 00:55:50.979317 1176706 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
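The 3ec20f2e.0, b5213941.0 and 51391683.0 names tested above are OpenSSL subject-hash links: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0 symlink so TLS clients can look it up by hash. A minimal sketch that reproduces one of them by hand (same minikubeCA.pem path as in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "hash link ${h}.0 present"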
	I1217 00:55:50.986668 1176706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:55:50.990400 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:55:51.033890 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:55:51.074982 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:55:51.116748 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:55:51.160579 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:55:51.202188 1176706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
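Each openssl call above uses -checkend 86400, i.e. it fails if the certificate expires within the next 24 hours. A hedged loop over the same control-plane certificates (paths taken from the commands above):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: valid for at least 24h" || echo "$c: expires within 24h"
    done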
	I1217 00:55:51.243239 1176706 kubeadm.go:401] StartCluster: {Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:55:51.243328 1176706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:55:51.243394 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.274971 1176706 cri.go:89] found id: ""
	I1217 00:55:51.275034 1176706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:55:51.283750 1176706 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:55:51.283758 1176706 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:55:51.283810 1176706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:55:51.291948 1176706 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.292487 1176706 kubeconfig.go:125] found "functional-389537" server: "https://192.168.49.2:8441"
	I1217 00:55:51.293778 1176706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:55:51.304922 1176706 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:41:14.220606710 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:55:50.397867980 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
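The reconfiguration is triggered purely by this unified diff: the only drift is the enable-admission-plugins value requested via the apiserver.enable-admission-plugins extra option shown earlier. A minimal reproduction of the same check:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no kubeadm config drift" \
      || echo "drift detected; the control plane will be reconfigured from kubeadm.yaml.new"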
	I1217 00:55:51.304944 1176706 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:55:51.304956 1176706 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 00:55:51.305024 1176706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:55:51.335594 1176706 cri.go:89] found id: ""
	I1217 00:55:51.335654 1176706 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:55:51.349252 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:55:51.357284 1176706 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 17 00:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:45 /etc/kubernetes/scheduler.conf
	
	I1217 00:55:51.357346 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:55:51.365155 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:55:51.373122 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.373177 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:55:51.380532 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.387880 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.387941 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:55:51.395488 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:55:51.402971 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:55:51.403027 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:55:51.410207 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:55:51.417914 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:51.465120 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.243254 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.461995 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:55:52.527345 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
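The restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than doing a full init. A hedged by-hand equivalent of the sequence above:

    KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
    # $phase is intentionally left unquoted so that e.g. "certs all" expands to two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" $KUBEADM init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done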
	I1217 00:55:52.573822 1176706 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:55:52.573908 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.074814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:53.574907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.075012 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:54.575023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:55.574684 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.074609 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:56.574663 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.074765 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:57.574635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.074907 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:58.574627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.074088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:55:59.574795 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.097233 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:00.574961 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.074054 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:01.574065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.075050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:02.574031 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.075006 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:03.574216 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.074748 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:04.573974 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.074753 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:05.574034 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.075017 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:06.574061 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.074905 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:07.574698 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.074763 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:08.574614 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.074085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:09.574076 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.074847 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:10.574675 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.074172 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:11.574715 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.074369 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:12.574662 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.074071 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:13.575002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.074917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:14.574153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.074723 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:15.574433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.074632 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:16.574760 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.074421 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:17.574365 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.074110 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:18.574084 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.074083 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:19.574229 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.075007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:20.574915 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.074637 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:21.574418 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.074231 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:22.574859 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.074383 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:23.574046 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.074153 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:24.574749 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.074247 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:25.574077 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.074002 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:26.574149 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.074309 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:27.574050 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.074975 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:28.574187 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.074918 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:29.574916 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.074771 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:30.574779 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.074798 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:31.573985 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.074834 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:32.574776 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.074670 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:33.574866 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.074740 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:34.574090 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.074115 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:35.574007 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.074661 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:36.574687 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.074553 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:37.574236 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.074239 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:38.574036 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.074932 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:39.574096 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.074026 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:40.574255 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.074880 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:41.574038 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.073993 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:42.574088 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.074056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:43.574323 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.074338 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:44.574154 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.074792 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:45.574063 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.074852 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:46.574810 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.074586 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:47.574043 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.075023 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:48.574226 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.074137 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:49.585259 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.074119 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:50.573988 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.074068 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:51.575029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:52.074819 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
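From 00:55:52 onward minikube polls for a kube-apiserver process roughly every 500 ms; after about a minute without a hit it falls back to the diagnostics below while continuing to poll. A sketch of an equivalent wait loop (the ~60 s budget matches the window visible in the timestamps, not a documented constant):

    # poll for up to ~60s for the apiserver process that the static pod should have started
    for i in $(seq 1 120); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is up"; break
      fi
      sleep 0.5
    done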
	I1217 00:56:52.574056 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:52.574153 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:52.600363 1176706 cri.go:89] found id: ""
	I1217 00:56:52.600377 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.600384 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:52.600390 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:52.600466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:52.625666 1176706 cri.go:89] found id: ""
	I1217 00:56:52.625679 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.625686 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:52.625692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:52.625750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:52.651207 1176706 cri.go:89] found id: ""
	I1217 00:56:52.651220 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.651228 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:52.651233 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:52.651289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:52.675877 1176706 cri.go:89] found id: ""
	I1217 00:56:52.675891 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.675898 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:52.675904 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:52.675968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:52.705638 1176706 cri.go:89] found id: ""
	I1217 00:56:52.705651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.705658 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:52.705663 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:52.705733 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:52.734795 1176706 cri.go:89] found id: ""
	I1217 00:56:52.734809 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.734816 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:52.734821 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:52.734882 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:52.765098 1176706 cri.go:89] found id: ""
	I1217 00:56:52.765112 1176706 logs.go:282] 0 containers: []
	W1217 00:56:52.765119 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:52.765127 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:52.765138 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:52.797741 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:52.797759 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:52.872988 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:52.873007 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:52.891536 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:52.891552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:52.956983 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:52.948697   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.949297   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.950928   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.951391   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:52.952934   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:52.956994 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:52.957004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
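At this point the failure signature is consistent: crictl finds no control-plane containers and kubectl is refused on localhost:8441, so the apiserver static pod never came up after the kubelet restart. Two hypothetical manual checks that match what the log shows:

    sudo crictl ps -a --name=kube-apiserver                        # expected: empty, as in the log
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable on 8441"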
	I1217 00:56:55.530194 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:55.540066 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:55.540129 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:55.566494 1176706 cri.go:89] found id: ""
	I1217 00:56:55.566509 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.566516 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:55.566521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:55.566579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:55.599453 1176706 cri.go:89] found id: ""
	I1217 00:56:55.599467 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.599474 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:55.599479 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:55.599539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:55.624628 1176706 cri.go:89] found id: ""
	I1217 00:56:55.624651 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.624659 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:55.624664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:55.624720 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:55.650853 1176706 cri.go:89] found id: ""
	I1217 00:56:55.650867 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.650874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:55.650879 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:55.650947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:55.676274 1176706 cri.go:89] found id: ""
	I1217 00:56:55.676287 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.676295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:55.676302 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:55.676363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:55.705470 1176706 cri.go:89] found id: ""
	I1217 00:56:55.705484 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.705491 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:55.705497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:55.705577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:55.729482 1176706 cri.go:89] found id: ""
	I1217 00:56:55.729495 1176706 logs.go:282] 0 containers: []
	W1217 00:56:55.729502 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:55.729510 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:55.729520 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:55.797202 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:55.797223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:55.816424 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:55.816452 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:55.887945 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:55.879676   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.880282   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.881927   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.882448   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:55.883977   11220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:55.887971 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:55.887984 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:55.962011 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:55.962032 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:56:58.492176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:56:58.503876 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:56:58.503952 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:56:58.530086 1176706 cri.go:89] found id: ""
	I1217 00:56:58.530101 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.530108 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:56:58.530114 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:56:58.530175 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:56:58.556063 1176706 cri.go:89] found id: ""
	I1217 00:56:58.556077 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.556084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:56:58.556090 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:56:58.556148 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:56:58.582188 1176706 cri.go:89] found id: ""
	I1217 00:56:58.582202 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.582209 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:56:58.582215 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:56:58.582295 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:56:58.607569 1176706 cri.go:89] found id: ""
	I1217 00:56:58.607583 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.607590 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:56:58.607595 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:56:58.607652 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:56:58.634350 1176706 cri.go:89] found id: ""
	I1217 00:56:58.634364 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.634371 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:56:58.634378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:56:58.634445 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:56:58.664026 1176706 cri.go:89] found id: ""
	I1217 00:56:58.664040 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.664048 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:56:58.664053 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:56:58.664114 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:56:58.689017 1176706 cri.go:89] found id: ""
	I1217 00:56:58.689030 1176706 logs.go:282] 0 containers: []
	W1217 00:56:58.689037 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:56:58.689050 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:56:58.689060 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:56:58.754795 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:56:58.754815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:56:58.775189 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:56:58.775206 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:56:58.849221 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:56:58.841124   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.841631   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843386   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.843732   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:58.845029   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:56:58.849231 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:56:58.849243 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:56:58.922086 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:56:58.922107 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.451030 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:01.460964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:01.461034 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:01.489661 1176706 cri.go:89] found id: ""
	I1217 00:57:01.489685 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.489693 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:01.489698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:01.489767 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:01.515445 1176706 cri.go:89] found id: ""
	I1217 00:57:01.515468 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.515476 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:01.515482 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:01.515549 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:01.540532 1176706 cri.go:89] found id: ""
	I1217 00:57:01.540546 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.540554 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:01.540560 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:01.540629 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:01.569650 1176706 cri.go:89] found id: ""
	I1217 00:57:01.569664 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.569671 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:01.569676 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:01.569738 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:01.596059 1176706 cri.go:89] found id: ""
	I1217 00:57:01.596072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.596080 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:01.596085 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:01.596140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:01.621197 1176706 cri.go:89] found id: ""
	I1217 00:57:01.621211 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.621218 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:01.621224 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:01.621282 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:01.650001 1176706 cri.go:89] found id: ""
	I1217 00:57:01.650014 1176706 logs.go:282] 0 containers: []
	W1217 00:57:01.650022 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:01.650029 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:01.650040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:01.667789 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:01.667805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:01.730637 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:01.722535   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.723059   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.724823   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.725246   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:01.726768   11432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:01.730688 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:01.730705 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:01.804764 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:01.804783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:01.853135 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:01.853152 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.422102 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:04.432445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:04.432511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:04.456733 1176706 cri.go:89] found id: ""
	I1217 00:57:04.456747 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.456754 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:04.456760 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:04.456817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:04.481576 1176706 cri.go:89] found id: ""
	I1217 00:57:04.481591 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.481599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:04.481604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:04.481663 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:04.511390 1176706 cri.go:89] found id: ""
	I1217 00:57:04.511405 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.511412 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:04.511417 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:04.511481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:04.539584 1176706 cri.go:89] found id: ""
	I1217 00:57:04.539608 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.539615 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:04.539621 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:04.539686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:04.564039 1176706 cri.go:89] found id: ""
	I1217 00:57:04.564054 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.564061 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:04.564067 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:04.564126 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:04.588270 1176706 cri.go:89] found id: ""
	I1217 00:57:04.588283 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.588291 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:04.588296 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:04.588352 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:04.615420 1176706 cri.go:89] found id: ""
	I1217 00:57:04.615435 1176706 logs.go:282] 0 containers: []
	W1217 00:57:04.615442 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:04.615450 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:04.615461 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:04.648626 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:04.648647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:04.714893 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:04.714913 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:04.733517 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:04.733535 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:04.824195 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:04.809957   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.810608   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.812310   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.813107   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:04.814847   11552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:04.824206 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:04.824217 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.400917 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:07.410917 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:07.410975 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:07.437282 1176706 cri.go:89] found id: ""
	I1217 00:57:07.437303 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.437315 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:07.437325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:07.437414 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:07.466491 1176706 cri.go:89] found id: ""
	I1217 00:57:07.466506 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.466513 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:07.466518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:07.466585 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:07.491017 1176706 cri.go:89] found id: ""
	I1217 00:57:07.491030 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.491037 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:07.491042 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:07.491100 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:07.516269 1176706 cri.go:89] found id: ""
	I1217 00:57:07.516288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.516295 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:07.516301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:07.516370 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:07.541854 1176706 cri.go:89] found id: ""
	I1217 00:57:07.541867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.541874 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:07.541880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:07.541948 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:07.571479 1176706 cri.go:89] found id: ""
	I1217 00:57:07.571493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.571509 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:07.571516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:07.571576 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:07.597046 1176706 cri.go:89] found id: ""
	I1217 00:57:07.597072 1176706 logs.go:282] 0 containers: []
	W1217 00:57:07.597079 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:07.597087 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:07.597097 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:07.672318 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:07.664081   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.664940   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666480   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.666786   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:07.668263   11635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:07.672336 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:07.672349 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:07.747576 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:07.747595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:07.779509 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:07.779525 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:07.855959 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:07.855980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.376085 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:10.386576 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:10.386639 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:10.413996 1176706 cri.go:89] found id: ""
	I1217 00:57:10.414010 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.414017 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:10.414022 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:10.414082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:10.440046 1176706 cri.go:89] found id: ""
	I1217 00:57:10.440060 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.440067 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:10.440073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:10.440131 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:10.465533 1176706 cri.go:89] found id: ""
	I1217 00:57:10.465547 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.465563 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:10.465569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:10.465631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:10.491563 1176706 cri.go:89] found id: ""
	I1217 00:57:10.491577 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.491585 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:10.491590 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:10.491653 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:10.519680 1176706 cri.go:89] found id: ""
	I1217 00:57:10.519694 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.519710 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:10.519717 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:10.519778 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:10.556939 1176706 cri.go:89] found id: ""
	I1217 00:57:10.556956 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.556963 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:10.556969 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:10.557025 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:10.582061 1176706 cri.go:89] found id: ""
	I1217 00:57:10.582075 1176706 logs.go:282] 0 containers: []
	W1217 00:57:10.582082 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:10.582091 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:10.582102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:10.651854 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:10.651875 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:10.671002 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:10.671020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:10.744191 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:10.735841   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.736607   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738206   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.738786   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:10.740270   11748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:10.744201 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:10.744213 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:10.823224 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:10.823244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:13.353067 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:13.363299 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:13.363363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:13.388077 1176706 cri.go:89] found id: ""
	I1217 00:57:13.388090 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.388098 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:13.388103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:13.388166 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:13.414095 1176706 cri.go:89] found id: ""
	I1217 00:57:13.414109 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.414117 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:13.414122 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:13.414178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:13.439153 1176706 cri.go:89] found id: ""
	I1217 00:57:13.439167 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.439174 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:13.439180 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:13.439237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:13.465255 1176706 cri.go:89] found id: ""
	I1217 00:57:13.465269 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.465277 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:13.465282 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:13.465342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:13.495274 1176706 cri.go:89] found id: ""
	I1217 00:57:13.495288 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.495295 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:13.495301 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:13.495359 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:13.520781 1176706 cri.go:89] found id: ""
	I1217 00:57:13.520795 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.520803 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:13.520808 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:13.520868 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:13.547934 1176706 cri.go:89] found id: ""
	I1217 00:57:13.547948 1176706 logs.go:282] 0 containers: []
	W1217 00:57:13.547955 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:13.547963 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:13.547974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:13.613843 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:13.613863 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:13.632465 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:13.632491 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:13.697651 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:13.689537   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.690026   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.691628   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.692073   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:13.693576   11852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:13.697662 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:13.697673 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:13.766608 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:13.766627 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:16.302176 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:16.312389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:16.312476 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:16.338447 1176706 cri.go:89] found id: ""
	I1217 00:57:16.338461 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.338468 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:16.338473 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:16.338533 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:16.365319 1176706 cri.go:89] found id: ""
	I1217 00:57:16.365333 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.365340 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:16.365346 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:16.365408 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:16.396455 1176706 cri.go:89] found id: ""
	I1217 00:57:16.396476 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.396483 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:16.396489 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:16.396550 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:16.425795 1176706 cri.go:89] found id: ""
	I1217 00:57:16.425809 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.425816 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:16.425822 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:16.425887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:16.454749 1176706 cri.go:89] found id: ""
	I1217 00:57:16.454763 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.454770 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:16.454776 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:16.454834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:16.479542 1176706 cri.go:89] found id: ""
	I1217 00:57:16.479555 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.479562 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:16.479567 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:16.479626 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:16.508783 1176706 cri.go:89] found id: ""
	I1217 00:57:16.508798 1176706 logs.go:282] 0 containers: []
	W1217 00:57:16.508805 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:16.508813 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:16.508824 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:16.577494 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:16.577515 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:16.595191 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:16.595211 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:16.665505 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:16.658250   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.658774   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.659833   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.660214   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:16.661668   11958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:16.665516 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:16.665528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:16.733110 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:16.733132 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:19.271702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:19.282422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:19.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:19.310766 1176706 cri.go:89] found id: ""
	I1217 00:57:19.310781 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.310788 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:19.310794 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:19.310856 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:19.336393 1176706 cri.go:89] found id: ""
	I1217 00:57:19.336407 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.336435 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:19.336441 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:19.336512 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:19.363243 1176706 cri.go:89] found id: ""
	I1217 00:57:19.363258 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.363265 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:19.363270 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:19.363329 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:19.389985 1176706 cri.go:89] found id: ""
	I1217 00:57:19.390000 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.390007 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:19.390013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:19.390073 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:19.416019 1176706 cri.go:89] found id: ""
	I1217 00:57:19.416032 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.416040 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:19.416045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:19.416103 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:19.445523 1176706 cri.go:89] found id: ""
	I1217 00:57:19.445538 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.445545 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:19.445550 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:19.445611 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:19.470033 1176706 cri.go:89] found id: ""
	I1217 00:57:19.470047 1176706 logs.go:282] 0 containers: []
	W1217 00:57:19.470055 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:19.470063 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:19.470075 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:19.535642 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:19.535662 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:19.553701 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:19.553718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:19.615955 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:19.607870   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.608470   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.609972   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.610359   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:19.611841   12068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:19.615966 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:19.615977 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:19.685077 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:19.685098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.217382 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:22.227714 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:22.227775 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:22.252242 1176706 cri.go:89] found id: ""
	I1217 00:57:22.252256 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.252263 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:22.252268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:22.252325 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:22.277476 1176706 cri.go:89] found id: ""
	I1217 00:57:22.277491 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.277498 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:22.277504 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:22.277561 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:22.302807 1176706 cri.go:89] found id: ""
	I1217 00:57:22.302821 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.302829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:22.302834 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:22.302905 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:22.332455 1176706 cri.go:89] found id: ""
	I1217 00:57:22.332469 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.332476 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:22.332483 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:22.332552 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:22.361365 1176706 cri.go:89] found id: ""
	I1217 00:57:22.361380 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.361387 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:22.361392 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:22.361453 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:22.387211 1176706 cri.go:89] found id: ""
	I1217 00:57:22.387224 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.387232 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:22.387237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:22.387297 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:22.413238 1176706 cri.go:89] found id: ""
	I1217 00:57:22.413252 1176706 logs.go:282] 0 containers: []
	W1217 00:57:22.413260 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:22.413267 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:22.413278 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:22.478085 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:22.469661   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.470499   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472209   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.472726   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:22.474224   12167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:22.478096 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:22.478105 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:22.546790 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:22.546813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:22.582711 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:22.582732 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:22.648758 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:22.648780 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.166726 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:25.177337 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:25.177400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:25.202561 1176706 cri.go:89] found id: ""
	I1217 00:57:25.202576 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.202583 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:25.202589 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:25.202650 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:25.231070 1176706 cri.go:89] found id: ""
	I1217 00:57:25.231085 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.231092 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:25.231098 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:25.231162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:25.256786 1176706 cri.go:89] found id: ""
	I1217 00:57:25.256799 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.256806 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:25.256811 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:25.256870 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:25.282392 1176706 cri.go:89] found id: ""
	I1217 00:57:25.282415 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.282423 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:25.282429 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:25.282488 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:25.311168 1176706 cri.go:89] found id: ""
	I1217 00:57:25.311182 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.311189 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:25.311195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:25.311259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:25.339431 1176706 cri.go:89] found id: ""
	I1217 00:57:25.339446 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.339453 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:25.339459 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:25.339517 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:25.365122 1176706 cri.go:89] found id: ""
	I1217 00:57:25.365136 1176706 logs.go:282] 0 containers: []
	W1217 00:57:25.365144 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:25.365152 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:25.365162 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:25.430307 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:25.430326 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:25.447805 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:25.447822 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:25.515790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:25.507095   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.507881   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.509683   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.510253   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:25.511838   12278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:25.515802 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:25.515813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:25.590022 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:25.590049 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.122003 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:28.132581 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:28.132644 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:28.158913 1176706 cri.go:89] found id: ""
	I1217 00:57:28.158927 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.158944 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:28.158950 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:28.159029 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:28.185443 1176706 cri.go:89] found id: ""
	I1217 00:57:28.185478 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.185486 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:28.185492 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:28.185565 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:28.212156 1176706 cri.go:89] found id: ""
	I1217 00:57:28.212180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.212187 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:28.212193 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:28.212303 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:28.238113 1176706 cri.go:89] found id: ""
	I1217 00:57:28.238128 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.238135 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:28.238140 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:28.238198 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:28.267252 1176706 cri.go:89] found id: ""
	I1217 00:57:28.267266 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.267273 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:28.267278 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:28.267335 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:28.299262 1176706 cri.go:89] found id: ""
	I1217 00:57:28.299277 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.299284 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:28.299290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:28.299349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:28.325216 1176706 cri.go:89] found id: ""
	I1217 00:57:28.325231 1176706 logs.go:282] 0 containers: []
	W1217 00:57:28.325247 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:28.325255 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:28.325267 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:28.342976 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:28.342992 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:28.411022 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:28.401954   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.402861   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404487   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.404937   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:28.406641   12383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:28.411033 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:28.411044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:28.479626 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:28.479647 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:28.508235 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:28.508251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.075024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:31.085476 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:31.085543 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:31.115244 1176706 cri.go:89] found id: ""
	I1217 00:57:31.115259 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.115267 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:31.115272 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:31.115332 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:31.146093 1176706 cri.go:89] found id: ""
	I1217 00:57:31.146111 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.146119 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:31.146125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:31.146188 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:31.173490 1176706 cri.go:89] found id: ""
	I1217 00:57:31.173505 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.173512 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:31.173518 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:31.173577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:31.199862 1176706 cri.go:89] found id: ""
	I1217 00:57:31.199876 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.199883 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:31.199889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:31.199953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:31.229151 1176706 cri.go:89] found id: ""
	I1217 00:57:31.229164 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.229172 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:31.229177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:31.229234 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:31.255292 1176706 cri.go:89] found id: ""
	I1217 00:57:31.255306 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.255313 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:31.255319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:31.255378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:31.280011 1176706 cri.go:89] found id: ""
	I1217 00:57:31.280024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:31.280032 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:31.280040 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:31.280050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:31.351624 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:31.351644 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:31.380210 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:31.380226 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:31.448265 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:31.448288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:31.466144 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:31.466161 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:31.530079 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:31.522205   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.522862   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524466   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.524902   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:31.526349   12507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.030804 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:34.041923 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:34.041984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:34.070601 1176706 cri.go:89] found id: ""
	I1217 00:57:34.070617 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.070624 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:34.070630 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:34.070689 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:34.097552 1176706 cri.go:89] found id: ""
	I1217 00:57:34.097566 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.097573 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:34.097579 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:34.097647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:34.124476 1176706 cri.go:89] found id: ""
	I1217 00:57:34.124490 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.124497 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:34.124503 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:34.124580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:34.150077 1176706 cri.go:89] found id: ""
	I1217 00:57:34.150091 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.150099 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:34.150104 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:34.150162 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:34.176964 1176706 cri.go:89] found id: ""
	I1217 00:57:34.176978 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.176992 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:34.176998 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:34.177055 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:34.201831 1176706 cri.go:89] found id: ""
	I1217 00:57:34.201845 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.201852 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:34.201857 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:34.201914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:34.227100 1176706 cri.go:89] found id: ""
	I1217 00:57:34.227114 1176706 logs.go:282] 0 containers: []
	W1217 00:57:34.227122 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:34.227129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:34.227140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:34.292098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:34.283901   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.284720   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286244   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.286732   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:34.288206   12593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:34.292108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:34.292119 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:34.361262 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:34.361287 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:34.395072 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:34.395087 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:34.462475 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:34.462498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:36.980702 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:36.992944 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:36.993003 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:37.025574 1176706 cri.go:89] found id: ""
	I1217 00:57:37.025592 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.025616 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:37.025622 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:37.025707 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:37.054876 1176706 cri.go:89] found id: ""
	I1217 00:57:37.054890 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.054897 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:37.054903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:37.054968 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:37.084974 1176706 cri.go:89] found id: ""
	I1217 00:57:37.084987 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.084995 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:37.085000 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:37.085059 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:37.110853 1176706 cri.go:89] found id: ""
	I1217 00:57:37.110867 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.110874 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:37.110883 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:37.110941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:37.137064 1176706 cri.go:89] found id: ""
	I1217 00:57:37.137083 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.137090 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:37.137096 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:37.137159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:37.167116 1176706 cri.go:89] found id: ""
	I1217 00:57:37.167130 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.167148 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:37.167162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:37.167230 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:37.192827 1176706 cri.go:89] found id: ""
	I1217 00:57:37.192848 1176706 logs.go:282] 0 containers: []
	W1217 00:57:37.192856 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:37.192863 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:37.192874 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:37.210956 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:37.210974 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:37.275882 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:37.268233   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.268669   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270167   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.270500   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:37.272013   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:37.275893 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:37.275904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:37.344194 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:37.344215 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:37.375642 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:37.375658 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:39.944605 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:39.954951 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:39.955014 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:39.984302 1176706 cri.go:89] found id: ""
	I1217 00:57:39.984316 1176706 logs.go:282] 0 containers: []
	W1217 00:57:39.984323 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:39.984328 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:39.984383 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:40.029442 1176706 cri.go:89] found id: ""
	I1217 00:57:40.029458 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.029466 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:40.029471 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:40.029538 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:40.063022 1176706 cri.go:89] found id: ""
	I1217 00:57:40.063037 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.063044 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:40.063049 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:40.063110 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:40.094257 1176706 cri.go:89] found id: ""
	I1217 00:57:40.094272 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.094280 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:40.094286 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:40.094349 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:40.127887 1176706 cri.go:89] found id: ""
	I1217 00:57:40.127901 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.127908 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:40.127913 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:40.127972 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:40.155475 1176706 cri.go:89] found id: ""
	I1217 00:57:40.155489 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.155496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:40.155502 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:40.155560 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:40.181940 1176706 cri.go:89] found id: ""
	I1217 00:57:40.181955 1176706 logs.go:282] 0 containers: []
	W1217 00:57:40.181962 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:40.181970 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:40.181980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:40.254464 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:40.254484 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:40.285810 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:40.285825 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:40.352509 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:40.352528 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:40.370334 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:40.370356 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:40.432624 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:40.424017   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.424714   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426434   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.426998   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:40.428630   12816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:42.932898 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:42.943186 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:42.943245 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:42.971121 1176706 cri.go:89] found id: ""
	I1217 00:57:42.971137 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.971144 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:42.971149 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:42.971207 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:42.997154 1176706 cri.go:89] found id: ""
	I1217 00:57:42.997169 1176706 logs.go:282] 0 containers: []
	W1217 00:57:42.997175 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:42.997181 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:42.997240 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:43.034752 1176706 cri.go:89] found id: ""
	I1217 00:57:43.034767 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.034775 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:43.034781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:43.034840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:43.064326 1176706 cri.go:89] found id: ""
	I1217 00:57:43.064339 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.064347 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:43.064352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:43.064428 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:43.095997 1176706 cri.go:89] found id: ""
	I1217 00:57:43.096011 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.096019 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:43.096024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:43.096082 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:43.126545 1176706 cri.go:89] found id: ""
	I1217 00:57:43.126560 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.126568 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:43.126573 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:43.126633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:43.157043 1176706 cri.go:89] found id: ""
	I1217 00:57:43.157058 1176706 logs.go:282] 0 containers: []
	W1217 00:57:43.157065 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:43.157073 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:43.157102 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:43.223228 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:43.223248 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:43.241053 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:43.241070 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:43.307388 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:43.299156   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.299931   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301567   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.301950   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:43.303461   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:43.307398 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:43.307409 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:43.376649 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:43.376669 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:45.908814 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:45.918992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:45.919051 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:45.944157 1176706 cri.go:89] found id: ""
	I1217 00:57:45.944170 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.944178 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:45.944183 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:45.944242 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:45.969417 1176706 cri.go:89] found id: ""
	I1217 00:57:45.969431 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.969438 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:45.969444 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:45.969502 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:45.995472 1176706 cri.go:89] found id: ""
	I1217 00:57:45.995486 1176706 logs.go:282] 0 containers: []
	W1217 00:57:45.995494 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:45.995499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:45.995566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:46.034994 1176706 cri.go:89] found id: ""
	I1217 00:57:46.035007 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.035015 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:46.035020 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:46.035081 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:46.065460 1176706 cri.go:89] found id: ""
	I1217 00:57:46.065473 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.065480 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:46.065486 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:46.065559 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:46.092450 1176706 cri.go:89] found id: ""
	I1217 00:57:46.092465 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.092472 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:46.092478 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:46.092557 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:46.122198 1176706 cri.go:89] found id: ""
	I1217 00:57:46.122212 1176706 logs.go:282] 0 containers: []
	W1217 00:57:46.122221 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:46.122229 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:46.122241 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:46.140129 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:46.140147 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:46.204790 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:46.196093   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.196769   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198320   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.198847   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:46.200339   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
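	[Editor's sketch - not part of the captured log.] The cycle above repeats because minikube's log collector keeps finding no control-plane containers and then retrying "kubectl describe nodes" against the apiserver endpoint it is probing (localhost:8441), which refuses the connection. A minimal way to reproduce the same checks by hand, assuming shell access to the node (for example via "minikube ssh") and using only the tools already invoked in the log plus ss and curl:
	    sudo crictl ps -a --quiet --name=kube-apiserver   # is an apiserver container present at all?
	    sudo ss -ltnp | grep 8441                         # is anything listening on the probed port?
	    curl -k https://localhost:8441/healthz            # does the endpoint answer once it is up?
	    sudo journalctl -u kubelet -n 100 --no-pager      # kubelet logs usually show why the static pod never started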
	I1217 00:57:46.204800 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:46.204810 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:46.273034 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:46.273054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:46.300763 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:46.300778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:48.875764 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:48.886304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:48.886369 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:48.923231 1176706 cri.go:89] found id: ""
	I1217 00:57:48.923246 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.923254 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:48.923259 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:48.923334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:48.951521 1176706 cri.go:89] found id: ""
	I1217 00:57:48.951536 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.951544 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:48.951549 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:48.951610 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:48.977574 1176706 cri.go:89] found id: ""
	I1217 00:57:48.977588 1176706 logs.go:282] 0 containers: []
	W1217 00:57:48.977595 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:48.977600 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:48.977661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:49.016389 1176706 cri.go:89] found id: ""
	I1217 00:57:49.016402 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.016410 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:49.016446 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:49.016511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:49.050180 1176706 cri.go:89] found id: ""
	I1217 00:57:49.050193 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.050201 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:49.050206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:49.050271 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:49.088387 1176706 cri.go:89] found id: ""
	I1217 00:57:49.088401 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.088409 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:49.088445 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:49.088508 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:49.118579 1176706 cri.go:89] found id: ""
	I1217 00:57:49.118593 1176706 logs.go:282] 0 containers: []
	W1217 00:57:49.118600 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:49.118608 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:49.118618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:49.189917 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:49.189938 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:49.208217 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:49.208234 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:49.270961 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:49.262487   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.263396   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265038   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.265400   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:49.266947   13114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:49.270977 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:49.270988 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:49.340033 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:49.340054 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:51.873428 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:51.883781 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:51.883840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:51.908479 1176706 cri.go:89] found id: ""
	I1217 00:57:51.908493 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.908500 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:51.908505 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:51.908562 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:51.938045 1176706 cri.go:89] found id: ""
	I1217 00:57:51.938061 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.938068 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:51.938073 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:51.938135 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:51.964570 1176706 cri.go:89] found id: ""
	I1217 00:57:51.964585 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.964592 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:51.964597 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:51.964654 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:51.989700 1176706 cri.go:89] found id: ""
	I1217 00:57:51.989714 1176706 logs.go:282] 0 containers: []
	W1217 00:57:51.989722 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:51.989727 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:51.989784 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:52.030756 1176706 cri.go:89] found id: ""
	I1217 00:57:52.030771 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.030779 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:52.030786 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:52.030860 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:52.067806 1176706 cri.go:89] found id: ""
	I1217 00:57:52.067829 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.067838 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:52.067845 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:52.067915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:52.097071 1176706 cri.go:89] found id: ""
	I1217 00:57:52.097102 1176706 logs.go:282] 0 containers: []
	W1217 00:57:52.097110 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:52.097118 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:52.097128 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:52.169931 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:52.169952 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:52.202012 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:52.202031 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:52.267897 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:52.267917 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:52.286898 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:52.286920 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:52.352095 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:52.343996   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.344455   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346179   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.346816   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:52.348264   13231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:54.853773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:54.863649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:54.863712 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:54.888435 1176706 cri.go:89] found id: ""
	I1217 00:57:54.888449 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.888456 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:54.888462 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:54.888523 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:54.927009 1176706 cri.go:89] found id: ""
	I1217 00:57:54.927024 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.927031 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:54.927037 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:54.927095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:54.953405 1176706 cri.go:89] found id: ""
	I1217 00:57:54.953420 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.953428 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:54.953434 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:54.953493 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:54.979162 1176706 cri.go:89] found id: ""
	I1217 00:57:54.979176 1176706 logs.go:282] 0 containers: []
	W1217 00:57:54.979183 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:54.979189 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:54.979256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:55.025542 1176706 cri.go:89] found id: ""
	I1217 00:57:55.025564 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.025572 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:55.025577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:55.025641 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:55.059408 1176706 cri.go:89] found id: ""
	I1217 00:57:55.059422 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.059429 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:55.059435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:55.059492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:55.085846 1176706 cri.go:89] found id: ""
	I1217 00:57:55.085860 1176706 logs.go:282] 0 containers: []
	W1217 00:57:55.085867 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:55.085875 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:55.085884 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:55.154061 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:55.154083 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:57:55.182650 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:55.182667 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:55.252924 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:55.252945 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:55.271464 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:55.271481 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:55.340175 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:55.331544   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.332127   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.333902   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.334499   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:55.336006   13338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:57.840461 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:57:57.853057 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:57:57.853178 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:57:57.883066 1176706 cri.go:89] found id: ""
	I1217 00:57:57.883081 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.883088 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:57:57.883094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:57:57.883152 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:57:57.909166 1176706 cri.go:89] found id: ""
	I1217 00:57:57.909180 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.909189 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:57:57.909195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:57:57.909255 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:57:57.935701 1176706 cri.go:89] found id: ""
	I1217 00:57:57.935716 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.935733 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:57:57.935739 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:57:57.935805 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:57:57.969374 1176706 cri.go:89] found id: ""
	I1217 00:57:57.969397 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.969404 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:57:57.969410 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:57:57.969481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:57:57.995365 1176706 cri.go:89] found id: ""
	I1217 00:57:57.995379 1176706 logs.go:282] 0 containers: []
	W1217 00:57:57.995397 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:57:57.995404 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:57:57.995460 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:57:58.025187 1176706 cri.go:89] found id: ""
	I1217 00:57:58.025207 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.025215 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:57:58.025221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:57:58.025343 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:57:58.062705 1176706 cri.go:89] found id: ""
	I1217 00:57:58.062719 1176706 logs.go:282] 0 containers: []
	W1217 00:57:58.062738 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:57:58.062745 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:57:58.062755 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:57:58.135108 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:57:58.135129 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:57:58.154038 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:57:58.154058 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:57:58.219558 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:57:58.210777   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.211463   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.212991   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.213513   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:58.215078   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:57:58.219569 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:57:58.219582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:57:58.287658 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:57:58.287678 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:00.817470 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:00.827992 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:00.828056 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:00.852955 1176706 cri.go:89] found id: ""
	I1217 00:58:00.852969 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.852976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:00.852983 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:00.853043 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:00.877725 1176706 cri.go:89] found id: ""
	I1217 00:58:00.877739 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.877746 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:00.877751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:00.877811 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:00.901883 1176706 cri.go:89] found id: ""
	I1217 00:58:00.901897 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.901905 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:00.901910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:00.901965 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:00.928695 1176706 cri.go:89] found id: ""
	I1217 00:58:00.928709 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.928716 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:00.928722 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:00.928780 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:00.953517 1176706 cri.go:89] found id: ""
	I1217 00:58:00.953531 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.953538 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:00.953544 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:00.953601 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:00.982916 1176706 cri.go:89] found id: ""
	I1217 00:58:00.982930 1176706 logs.go:282] 0 containers: []
	W1217 00:58:00.982946 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:00.982952 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:00.983021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:01.012486 1176706 cri.go:89] found id: ""
	I1217 00:58:01.012510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:01.012518 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:01.012526 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:01.012538 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:01.034573 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:01.034595 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:01.107160 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:01.097516   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.098451   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100366   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.100790   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:01.102608   13530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:01.107170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:01.107180 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:01.180136 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:01.180158 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:01.212434 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:01.212451 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:03.780773 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:03.791245 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:03.791309 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:03.819281 1176706 cri.go:89] found id: ""
	I1217 00:58:03.819296 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.819304 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:03.819309 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:03.819367 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:03.847330 1176706 cri.go:89] found id: ""
	I1217 00:58:03.847344 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.847351 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:03.847357 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:03.847416 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:03.874793 1176706 cri.go:89] found id: ""
	I1217 00:58:03.874806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.874814 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:03.874819 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:03.874883 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:03.904651 1176706 cri.go:89] found id: ""
	I1217 00:58:03.904665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.904672 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:03.904678 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:03.904744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:03.930157 1176706 cri.go:89] found id: ""
	I1217 00:58:03.930178 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.930186 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:03.930191 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:03.930252 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:03.960348 1176706 cri.go:89] found id: ""
	I1217 00:58:03.960371 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.960380 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:03.960386 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:03.960473 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:03.985501 1176706 cri.go:89] found id: ""
	I1217 00:58:03.985515 1176706 logs.go:282] 0 containers: []
	W1217 00:58:03.985523 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:03.985530 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:03.985541 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:04.005563 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:04.005592 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:04.085204 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:04.076669   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.077344   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.078951   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.079451   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:04.080991   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:04.085219 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:04.085231 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:04.154363 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:04.154385 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:04.182481 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:04.182498 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:06.754413 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:06.765192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:06.765266 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:06.792761 1176706 cri.go:89] found id: ""
	I1217 00:58:06.792779 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.792786 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:06.792791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:06.792850 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:06.817882 1176706 cri.go:89] found id: ""
	I1217 00:58:06.817896 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.817903 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:06.817909 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:06.817967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:06.843295 1176706 cri.go:89] found id: ""
	I1217 00:58:06.843309 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.843316 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:06.843321 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:06.843380 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:06.871025 1176706 cri.go:89] found id: ""
	I1217 00:58:06.871039 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.871046 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:06.871052 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:06.871109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:06.899109 1176706 cri.go:89] found id: ""
	I1217 00:58:06.899124 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.899132 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:06.899137 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:06.899212 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:06.923948 1176706 cri.go:89] found id: ""
	I1217 00:58:06.923962 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.923980 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:06.923987 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:06.924045 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:06.948813 1176706 cri.go:89] found id: ""
	I1217 00:58:06.948827 1176706 logs.go:282] 0 containers: []
	W1217 00:58:06.948834 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:06.948842 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:06.948853 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:07.015114 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:07.015140 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:07.034991 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:07.035010 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:07.105757 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:07.097668   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.098261   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.099747   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.100085   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:07.101370   13744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:07.105767 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:07.105778 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:07.177693 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:07.177717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:09.709755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:09.720409 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:09.720507 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:09.745603 1176706 cri.go:89] found id: ""
	I1217 00:58:09.745618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.745626 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:09.745631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:09.745691 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:09.775492 1176706 cri.go:89] found id: ""
	I1217 00:58:09.775507 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.775515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:09.775520 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:09.775579 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:09.801149 1176706 cri.go:89] found id: ""
	I1217 00:58:09.801164 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.801171 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:09.801177 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:09.801238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:09.830147 1176706 cri.go:89] found id: ""
	I1217 00:58:09.830160 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.830168 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:09.830173 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:09.830232 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:09.858791 1176706 cri.go:89] found id: ""
	I1217 00:58:09.858806 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.858825 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:09.858832 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:09.858911 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:09.884827 1176706 cri.go:89] found id: ""
	I1217 00:58:09.884842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.884849 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:09.884855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:09.884918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:09.910380 1176706 cri.go:89] found id: ""
	I1217 00:58:09.910394 1176706 logs.go:282] 0 containers: []
	W1217 00:58:09.910402 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:09.910409 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:09.910420 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:09.976905 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:09.976924 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:09.995004 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:09.995027 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:10.084593 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:10.071867   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.076504   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.077251   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079000   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:10.079423   13847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:10.084604 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:10.084614 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:10.157583 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:10.157604 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.691225 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:12.701275 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:12.701340 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:12.730986 1176706 cri.go:89] found id: ""
	I1217 00:58:12.731000 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.731018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:12.731024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:12.731084 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:12.757010 1176706 cri.go:89] found id: ""
	I1217 00:58:12.757029 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.757037 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:12.757045 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:12.757119 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:12.782232 1176706 cri.go:89] found id: ""
	I1217 00:58:12.782245 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.782252 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:12.782257 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:12.782314 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:12.808352 1176706 cri.go:89] found id: ""
	I1217 00:58:12.808366 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.808373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:12.808378 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:12.808472 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:12.834094 1176706 cri.go:89] found id: ""
	I1217 00:58:12.834109 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.834116 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:12.834121 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:12.834184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:12.861537 1176706 cri.go:89] found id: ""
	I1217 00:58:12.861551 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.861558 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:12.861564 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:12.861625 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:12.891320 1176706 cri.go:89] found id: ""
	I1217 00:58:12.891334 1176706 logs.go:282] 0 containers: []
	W1217 00:58:12.891351 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:12.891360 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:12.891373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:12.961252 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:12.961272 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:12.990873 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:12.990889 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:13.068166 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:13.068185 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:13.087641 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:13.087660 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:13.158967 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:13.149774   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.150788   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.152645   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.153247   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:13.154939   13972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:15.660635 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:15.670593 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:15.670685 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:15.695674 1176706 cri.go:89] found id: ""
	I1217 00:58:15.695688 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.695695 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:15.695700 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:15.695757 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:15.723007 1176706 cri.go:89] found id: ""
	I1217 00:58:15.723020 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.723028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:15.723033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:15.723093 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:15.752134 1176706 cri.go:89] found id: ""
	I1217 00:58:15.752149 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.752156 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:15.752161 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:15.752219 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:15.777521 1176706 cri.go:89] found id: ""
	I1217 00:58:15.777535 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.777542 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:15.777547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:15.777606 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:15.805205 1176706 cri.go:89] found id: ""
	I1217 00:58:15.805220 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.805233 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:15.805239 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:15.805296 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:15.830102 1176706 cri.go:89] found id: ""
	I1217 00:58:15.830116 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.830123 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:15.830129 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:15.830191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:15.859258 1176706 cri.go:89] found id: ""
	I1217 00:58:15.859272 1176706 logs.go:282] 0 containers: []
	W1217 00:58:15.859279 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:15.859297 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:15.859307 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:15.924910 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:15.924930 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:15.943203 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:15.943219 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:16.011016 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:16.000728   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.001626   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.003694   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.004151   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:16.006159   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:16.011027 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:16.011038 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:16.094076 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:16.094096 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:18.624032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:18.634861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:18.634925 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:18.660502 1176706 cri.go:89] found id: ""
	I1217 00:58:18.660528 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.660536 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:18.660541 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:18.660600 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:18.685828 1176706 cri.go:89] found id: ""
	I1217 00:58:18.685841 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.685848 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:18.685854 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:18.685920 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:18.716173 1176706 cri.go:89] found id: ""
	I1217 00:58:18.716187 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.716194 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:18.716199 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:18.716260 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:18.742960 1176706 cri.go:89] found id: ""
	I1217 00:58:18.742975 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.742983 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:18.742988 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:18.743046 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:18.768597 1176706 cri.go:89] found id: ""
	I1217 00:58:18.768610 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.768623 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:18.768628 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:18.768687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:18.795244 1176706 cri.go:89] found id: ""
	I1217 00:58:18.795267 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.795276 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:18.795281 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:18.795355 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:18.826316 1176706 cri.go:89] found id: ""
	I1217 00:58:18.826330 1176706 logs.go:282] 0 containers: []
	W1217 00:58:18.826337 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:18.826345 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:18.826354 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:18.892936 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:18.892954 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:18.911274 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:18.911292 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:18.973399 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:18.965510   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.966086   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.967626   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.968080   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:18.969596   14161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:18.973409 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:18.973432 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:19.052103 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:19.052124 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.589056 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:21.599320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:21.599382 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:21.626547 1176706 cri.go:89] found id: ""
	I1217 00:58:21.626561 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.626568 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:21.626574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:21.626631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:21.651881 1176706 cri.go:89] found id: ""
	I1217 00:58:21.651895 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.651902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:21.651910 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:21.651967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:21.677496 1176706 cri.go:89] found id: ""
	I1217 00:58:21.677510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.677519 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:21.677524 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:21.677580 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:21.701536 1176706 cri.go:89] found id: ""
	I1217 00:58:21.701550 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.701557 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:21.701562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:21.701619 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:21.725663 1176706 cri.go:89] found id: ""
	I1217 00:58:21.725677 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.725695 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:21.725701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:21.725772 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:21.749912 1176706 cri.go:89] found id: ""
	I1217 00:58:21.749926 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.749937 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:21.749943 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:21.750000 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:21.774360 1176706 cri.go:89] found id: ""
	I1217 00:58:21.774374 1176706 logs.go:282] 0 containers: []
	W1217 00:58:21.774381 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:21.774389 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:21.774399 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:21.841964 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:21.841983 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:21.870200 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:21.870218 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:21.943734 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:21.943754 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:21.961798 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:21.961816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:22.037147 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:22.027622   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.028729   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.029558   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.030679   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:22.031527   14275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.537433 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:24.547596 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:24.547661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:24.575282 1176706 cri.go:89] found id: ""
	I1217 00:58:24.575297 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.575306 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:24.575312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:24.575371 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:24.600578 1176706 cri.go:89] found id: ""
	I1217 00:58:24.600592 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.600599 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:24.600604 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:24.600665 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:24.626604 1176706 cri.go:89] found id: ""
	I1217 00:58:24.626618 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.626626 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:24.626631 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:24.626687 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:24.652284 1176706 cri.go:89] found id: ""
	I1217 00:58:24.652298 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.652316 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:24.652323 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:24.652381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:24.681413 1176706 cri.go:89] found id: ""
	I1217 00:58:24.681426 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.681433 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:24.681439 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:24.681495 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:24.709801 1176706 cri.go:89] found id: ""
	I1217 00:58:24.709815 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.709822 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:24.709830 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:24.709887 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:24.741982 1176706 cri.go:89] found id: ""
	I1217 00:58:24.741995 1176706 logs.go:282] 0 containers: []
	W1217 00:58:24.742010 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:24.742018 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:24.742029 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:24.806559 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:24.798575   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.799053   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.800685   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.801164   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:24.802733   14363 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:24.806571 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:24.806581 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:24.875943 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:24.875962 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:24.904944 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:24.904960 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:24.972857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:24.972878 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.491741 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:27.502162 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:27.502241 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:27.528328 1176706 cri.go:89] found id: ""
	I1217 00:58:27.528343 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.528350 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:27.528356 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:27.528455 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:27.558520 1176706 cri.go:89] found id: ""
	I1217 00:58:27.558534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.558541 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:27.558547 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:27.558605 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:27.587047 1176706 cri.go:89] found id: ""
	I1217 00:58:27.587061 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.587070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:27.587075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:27.587133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:27.615351 1176706 cri.go:89] found id: ""
	I1217 00:58:27.615365 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.615373 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:27.615381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:27.615443 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:27.640936 1176706 cri.go:89] found id: ""
	I1217 00:58:27.640950 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.640959 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:27.640964 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:27.641021 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:27.667985 1176706 cri.go:89] found id: ""
	I1217 00:58:27.667999 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.668007 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:27.668013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:27.668077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:27.694148 1176706 cri.go:89] found id: ""
	I1217 00:58:27.694162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:27.694170 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:27.694177 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:27.694188 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:27.764618 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:27.764639 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:27.784025 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:27.784040 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:27.852310 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:27.844025   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.844890   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846482   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.846795   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:27.848347   14470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:27.852320 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:27.852331 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:27.922044 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:27.922065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:30.450766 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:30.460791 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:30.460852 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:30.485997 1176706 cri.go:89] found id: ""
	I1217 00:58:30.486011 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.486018 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:30.486023 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:30.486080 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:30.512125 1176706 cri.go:89] found id: ""
	I1217 00:58:30.512138 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.512157 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:30.512163 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:30.512221 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:30.538512 1176706 cri.go:89] found id: ""
	I1217 00:58:30.538526 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.538533 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:30.538539 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:30.538597 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:30.564757 1176706 cri.go:89] found id: ""
	I1217 00:58:30.564771 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.564778 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:30.564784 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:30.564842 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:30.594808 1176706 cri.go:89] found id: ""
	I1217 00:58:30.594821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.594840 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:30.594846 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:30.594919 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:30.624595 1176706 cri.go:89] found id: ""
	I1217 00:58:30.624609 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.624617 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:30.624623 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:30.624683 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:30.653013 1176706 cri.go:89] found id: ""
	I1217 00:58:30.653027 1176706 logs.go:282] 0 containers: []
	W1217 00:58:30.653034 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:30.653042 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:30.653052 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:30.720030 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:30.720050 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:30.738237 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:30.738255 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:30.801692 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:30.793674   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.794386   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.795891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.796359   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:30.797891   14574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:30.801705 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:30.801717 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:30.870606 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:30.870628 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.401439 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:33.411804 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:33.411865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:33.437664 1176706 cri.go:89] found id: ""
	I1217 00:58:33.437678 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.437686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:33.437692 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:33.437752 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:33.463774 1176706 cri.go:89] found id: ""
	I1217 00:58:33.463796 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.463803 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:33.463809 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:33.463865 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:33.492800 1176706 cri.go:89] found id: ""
	I1217 00:58:33.492822 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.492829 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:33.492835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:33.492896 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:33.518396 1176706 cri.go:89] found id: ""
	I1217 00:58:33.518410 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.518417 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:33.518422 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:33.518481 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:33.545369 1176706 cri.go:89] found id: ""
	I1217 00:58:33.545385 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.545393 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:33.545398 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:33.545469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:33.571642 1176706 cri.go:89] found id: ""
	I1217 00:58:33.571665 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.571673 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:33.571679 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:33.571751 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:33.598928 1176706 cri.go:89] found id: ""
	I1217 00:58:33.598953 1176706 logs.go:282] 0 containers: []
	W1217 00:58:33.598961 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:33.598970 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:33.598980 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:33.617218 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:33.617237 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:33.681042 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:33.672222   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.672730   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.674452   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.675037   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:33.676582   14679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:33.681053 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:33.681064 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:33.750561 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:33.750582 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:33.779618 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:33.779637 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.351872 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:36.361748 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:36.361812 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:36.387484 1176706 cri.go:89] found id: ""
	I1217 00:58:36.387498 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.387505 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:36.387511 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:36.387567 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:36.413880 1176706 cri.go:89] found id: ""
	I1217 00:58:36.413894 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.413902 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:36.413922 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:36.413979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:36.439073 1176706 cri.go:89] found id: ""
	I1217 00:58:36.439087 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.439095 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:36.439100 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:36.439159 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:36.464148 1176706 cri.go:89] found id: ""
	I1217 00:58:36.464162 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.464169 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:36.464175 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:36.464237 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:36.489659 1176706 cri.go:89] found id: ""
	I1217 00:58:36.489673 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.489681 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:36.489686 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:36.489744 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:36.514865 1176706 cri.go:89] found id: ""
	I1217 00:58:36.514879 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.514887 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:36.514892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:36.514953 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:36.545081 1176706 cri.go:89] found id: ""
	I1217 00:58:36.545095 1176706 logs.go:282] 0 containers: []
	W1217 00:58:36.545103 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:36.545110 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:36.545120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:36.620571 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:36.620599 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:36.652294 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:36.652313 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:36.720685 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:36.720708 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:36.738692 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:36.738709 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:36.804409 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:36.795462   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.796112   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.797857   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.798439   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:36.800108   14798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
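	(The block above, and the near-identical blocks that follow, show minikube polling for a healthy kube-apiserver: it pgreps for the process, asks crictl for a kube-apiserver container, and retries `kubectl describe nodes`, all of which fail because nothing is listening on localhost:8441. A minimal manual sketch of the same check from inside the node, using the commands already shown in this log plus a direct probe of the apiserver's standard /healthz endpoint — the /healthz path is generic kube-apiserver behaviour, not something this report itself confirms:

	    # is an apiserver process or container present at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # probe the port the kubeconfig points at (8441 in this run)
	    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"
	)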
	I1217 00:58:39.304571 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:39.315407 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:39.315469 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:39.343748 1176706 cri.go:89] found id: ""
	I1217 00:58:39.343762 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.343769 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:39.343775 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:39.343834 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:39.371633 1176706 cri.go:89] found id: ""
	I1217 00:58:39.371648 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.371655 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:39.371661 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:39.371750 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:39.397168 1176706 cri.go:89] found id: ""
	I1217 00:58:39.397183 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.397190 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:39.397196 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:39.397254 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:39.422379 1176706 cri.go:89] found id: ""
	I1217 00:58:39.422393 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.422400 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:39.422406 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:39.422466 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:39.451362 1176706 cri.go:89] found id: ""
	I1217 00:58:39.451376 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.451384 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:39.451389 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:39.451447 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:39.476838 1176706 cri.go:89] found id: ""
	I1217 00:58:39.476852 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.476862 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:39.476867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:39.476926 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:39.501892 1176706 cri.go:89] found id: ""
	I1217 00:58:39.501905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:39.501912 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:39.501924 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:39.501933 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:39.571771 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:39.563239   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.563917   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.565641   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.566113   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:39.567797   14883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:39.571783 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:39.571793 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:39.642123 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:39.642144 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:39.673585 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:39.673602 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:39.742217 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:39.742236 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.260825 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:42.274064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:42.274140 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:42.315322 1176706 cri.go:89] found id: ""
	I1217 00:58:42.315336 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.315346 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:42.315352 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:42.315432 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:42.348891 1176706 cri.go:89] found id: ""
	I1217 00:58:42.348906 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.348914 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:42.348920 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:42.348984 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:42.376853 1176706 cri.go:89] found id: ""
	I1217 00:58:42.376867 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.376874 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:42.376880 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:42.376940 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:42.402292 1176706 cri.go:89] found id: ""
	I1217 00:58:42.402307 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.402315 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:42.402320 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:42.402381 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:42.432293 1176706 cri.go:89] found id: ""
	I1217 00:58:42.432306 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.432314 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:42.432319 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:42.432378 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:42.459173 1176706 cri.go:89] found id: ""
	I1217 00:58:42.459188 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.459195 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:42.459200 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:42.459259 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:42.485520 1176706 cri.go:89] found id: ""
	I1217 00:58:42.485534 1176706 logs.go:282] 0 containers: []
	W1217 00:58:42.485541 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:42.485549 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:42.485562 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:42.553260 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:42.553281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:42.571244 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:42.571261 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:42.633598 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:42.625508   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.626225   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.627807   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.628327   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:42.629889   14997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:42.633609 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:42.633622 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:42.706387 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:42.706408 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.237565 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:45.259348 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:45.259429 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:45.305574 1176706 cri.go:89] found id: ""
	I1217 00:58:45.305589 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.305597 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:45.305602 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:45.305664 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:45.349162 1176706 cri.go:89] found id: ""
	I1217 00:58:45.349177 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.349187 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:45.349192 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:45.349256 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:45.376828 1176706 cri.go:89] found id: ""
	I1217 00:58:45.376842 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.376849 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:45.376855 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:45.376915 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:45.402470 1176706 cri.go:89] found id: ""
	I1217 00:58:45.402485 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.402492 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:45.402497 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:45.402554 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:45.429756 1176706 cri.go:89] found id: ""
	I1217 00:58:45.429790 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.429820 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:45.429842 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:45.429980 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:45.459618 1176706 cri.go:89] found id: ""
	I1217 00:58:45.459632 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.459640 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:45.459647 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:45.459709 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:45.486504 1176706 cri.go:89] found id: ""
	I1217 00:58:45.486518 1176706 logs.go:282] 0 containers: []
	W1217 00:58:45.486526 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:45.486533 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:45.486549 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:45.505026 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:45.505044 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:45.569592 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:45.560398   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561016   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.561996   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563557   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:45.563995   15099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:45.569602 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:45.569612 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:45.642249 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:45.642270 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:45.673783 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:45.673799 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.241441 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:48.253986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:48.254052 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:48.298549 1176706 cri.go:89] found id: ""
	I1217 00:58:48.298562 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.298569 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:48.298575 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:48.298633 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:48.329982 1176706 cri.go:89] found id: ""
	I1217 00:58:48.329997 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.330004 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:48.330010 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:48.330068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:48.356278 1176706 cri.go:89] found id: ""
	I1217 00:58:48.356291 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.356298 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:48.356304 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:48.356363 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:48.381931 1176706 cri.go:89] found id: ""
	I1217 00:58:48.381944 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.381952 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:48.381957 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:48.382012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:48.408076 1176706 cri.go:89] found id: ""
	I1217 00:58:48.408091 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.408098 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:48.408103 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:48.408167 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:48.438515 1176706 cri.go:89] found id: ""
	I1217 00:58:48.438529 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.438536 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:48.438542 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:48.438615 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:48.464771 1176706 cri.go:89] found id: ""
	I1217 00:58:48.464784 1176706 logs.go:282] 0 containers: []
	W1217 00:58:48.464791 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:48.464800 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:48.464815 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:48.531756 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:48.531777 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:48.550180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:48.550197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:48.614503 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:48.606501   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.607019   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.608643   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.609069   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:48.610690   15206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:48.614514 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:48.614524 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:48.683497 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:48.683519 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:51.214024 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:51.224516 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:51.224581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:51.250104 1176706 cri.go:89] found id: ""
	I1217 00:58:51.250118 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.250125 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:51.250131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:51.250204 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:51.287241 1176706 cri.go:89] found id: ""
	I1217 00:58:51.287255 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.287263 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:51.287268 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:51.287334 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:51.326285 1176706 cri.go:89] found id: ""
	I1217 00:58:51.326299 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.326306 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:51.326312 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:51.326375 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:51.353495 1176706 cri.go:89] found id: ""
	I1217 00:58:51.353509 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.353516 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:51.353521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:51.353577 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:51.379404 1176706 cri.go:89] found id: ""
	I1217 00:58:51.379417 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.379425 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:51.379430 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:51.379489 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:51.405891 1176706 cri.go:89] found id: ""
	I1217 00:58:51.405905 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.405912 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:51.405919 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:51.405979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:51.431497 1176706 cri.go:89] found id: ""
	I1217 00:58:51.431510 1176706 logs.go:282] 0 containers: []
	W1217 00:58:51.431529 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:51.431537 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:51.431547 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:51.497786 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:51.497805 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:51.516101 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:51.516120 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:51.584128 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:51.576120   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.576772   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578335   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.578768   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:51.580234   15308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:51.584139 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:51.584150 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:51.652739 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:51.652760 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:54.182755 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:54.194058 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:54.194127 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:54.219806 1176706 cri.go:89] found id: ""
	I1217 00:58:54.219821 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.219828 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:54.219833 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:54.219894 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:54.245268 1176706 cri.go:89] found id: ""
	I1217 00:58:54.245281 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.245289 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:54.245294 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:54.245353 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:54.278676 1176706 cri.go:89] found id: ""
	I1217 00:58:54.278690 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.278697 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:54.278703 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:54.278766 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:54.305307 1176706 cri.go:89] found id: ""
	I1217 00:58:54.305321 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.305329 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:54.305334 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:54.305400 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:54.330666 1176706 cri.go:89] found id: ""
	I1217 00:58:54.330680 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.330688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:54.330693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:54.330763 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:54.356855 1176706 cri.go:89] found id: ""
	I1217 00:58:54.356875 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.356886 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:54.356892 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:54.356985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:54.390389 1176706 cri.go:89] found id: ""
	I1217 00:58:54.390404 1176706 logs.go:282] 0 containers: []
	W1217 00:58:54.390411 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:54.390419 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:54.390429 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:54.456633 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:54.456654 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:54.474716 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:54.474734 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:54.542032 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:54.533728   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.534374   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.535968   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.536411   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:54.538038   15414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:54.542052 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:54.542063 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:54.614689 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:54.614710 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:58:57.146377 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:58:57.156881 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:58:57.156942 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:58:57.181785 1176706 cri.go:89] found id: ""
	I1217 00:58:57.181800 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.181808 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:58:57.181813 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:58:57.181869 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:58:57.208021 1176706 cri.go:89] found id: ""
	I1217 00:58:57.208046 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.208059 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:58:57.208065 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:58:57.208133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:58:57.235483 1176706 cri.go:89] found id: ""
	I1217 00:58:57.235497 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.235505 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:58:57.235510 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:58:57.235569 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:58:57.269950 1176706 cri.go:89] found id: ""
	I1217 00:58:57.269972 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.269980 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:58:57.269986 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:58:57.270063 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:58:57.296896 1176706 cri.go:89] found id: ""
	I1217 00:58:57.296911 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.296918 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:58:57.296924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:58:57.296983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:58:57.325435 1176706 cri.go:89] found id: ""
	I1217 00:58:57.325452 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.325462 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:58:57.325468 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:58:57.325526 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:58:57.350942 1176706 cri.go:89] found id: ""
	I1217 00:58:57.350957 1176706 logs.go:282] 0 containers: []
	W1217 00:58:57.350965 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:58:57.350973 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:58:57.350982 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:58:57.416866 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:58:57.416886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:58:57.434717 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:58:57.434736 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:58:57.499393 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:58:57.490656   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.491229   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.492957   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.493722   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:58:57.495439   15519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:58:57.499403 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:58:57.499414 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:58:57.567648 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:58:57.567668 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:00.097029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:00.143893 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:00.143993 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:00.260647 1176706 cri.go:89] found id: ""
	I1217 00:59:00.262402 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.262438 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:00.262449 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:00.262564 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:00.330718 1176706 cri.go:89] found id: ""
	I1217 00:59:00.330734 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.330745 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:00.330751 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:00.330862 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:00.372603 1176706 cri.go:89] found id: ""
	I1217 00:59:00.372630 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.372638 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:00.372645 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:00.372721 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:00.403443 1176706 cri.go:89] found id: ""
	I1217 00:59:00.403469 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.403478 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:00.403484 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:00.403558 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:00.432235 1176706 cri.go:89] found id: ""
	I1217 00:59:00.432260 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.432268 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:00.432274 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:00.432341 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:00.464475 1176706 cri.go:89] found id: ""
	I1217 00:59:00.464489 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.464496 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:00.464501 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:00.464563 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:00.494126 1176706 cri.go:89] found id: ""
	I1217 00:59:00.494156 1176706 logs.go:282] 0 containers: []
	W1217 00:59:00.494164 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:00.494172 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:00.494182 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:00.564811 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:00.564831 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:00.582720 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:00.582738 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:00.643909 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:00.635659   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.636320   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.637717   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.638401   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:00.640051   15627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:00.643921 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:00.643931 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:00.716875 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:00.716895 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
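	(When every poll comes back empty, the useful signal is in the journals the loop keeps collecting rather than in the repeated kubectl errors, which only restate that port 8441 refuses connections. A short sketch of pulling the same diagnostics by hand: the journalctl and crictl invocations are copied from the log above, while the final `config view` call is an added convenience, not part of the test run, for confirming which server address the kubeconfig targets:

	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager
	    sudo crictl ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig config view --minify \
	      -o jsonpath='{.clusters[0].cluster.server}'
	)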
	I1217 00:59:03.245660 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:03.256968 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:03.257032 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:03.283953 1176706 cri.go:89] found id: ""
	I1217 00:59:03.283968 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.283976 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:03.283981 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:03.284041 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:03.313015 1176706 cri.go:89] found id: ""
	I1217 00:59:03.313029 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.313036 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:03.313041 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:03.313098 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:03.341219 1176706 cri.go:89] found id: ""
	I1217 00:59:03.341233 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.341241 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:03.341246 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:03.341304 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:03.366417 1176706 cri.go:89] found id: ""
	I1217 00:59:03.366430 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.366437 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:03.366443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:03.366499 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:03.395548 1176706 cri.go:89] found id: ""
	I1217 00:59:03.395561 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.395568 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:03.395574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:03.395631 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:03.425673 1176706 cri.go:89] found id: ""
	I1217 00:59:03.425687 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.425694 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:03.425699 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:03.425758 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:03.452761 1176706 cri.go:89] found id: ""
	I1217 00:59:03.452775 1176706 logs.go:282] 0 containers: []
	W1217 00:59:03.452782 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:03.452790 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:03.452813 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:03.470985 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:03.471004 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:03.539585 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:03.531310   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.531994   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.533722   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.534324   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:03.535721   15728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:03.539606 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:03.539617 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:03.608766 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:03.608787 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:03.641472 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:03.641487 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.214627 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:06.225029 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:06.225095 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:06.262898 1176706 cri.go:89] found id: ""
	I1217 00:59:06.262912 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.262919 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:06.262924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:06.262979 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:06.295811 1176706 cri.go:89] found id: ""
	I1217 00:59:06.295825 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.295832 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:06.295837 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:06.295900 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:06.325305 1176706 cri.go:89] found id: ""
	I1217 00:59:06.325319 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.325326 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:06.325331 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:06.325388 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:06.350976 1176706 cri.go:89] found id: ""
	I1217 00:59:06.350990 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.350997 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:06.351002 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:06.351061 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:06.381013 1176706 cri.go:89] found id: ""
	I1217 00:59:06.381027 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.381034 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:06.381040 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:06.381156 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:06.407543 1176706 cri.go:89] found id: ""
	I1217 00:59:06.407556 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.407564 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:06.407569 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:06.407627 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:06.435419 1176706 cri.go:89] found id: ""
	I1217 00:59:06.435433 1176706 logs.go:282] 0 containers: []
	W1217 00:59:06.435440 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:06.435448 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:06.435460 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:06.472071 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:06.472098 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:06.540915 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:06.540936 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:06.558800 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:06.558816 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:06.626144 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:06.617136   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.618091   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.619874   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.620546   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:06.622215   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:06.626156 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:06.626167 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.199032 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:09.210273 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:09.210345 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:09.238454 1176706 cri.go:89] found id: ""
	I1217 00:59:09.238468 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.238475 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:09.238481 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:09.238539 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:09.283355 1176706 cri.go:89] found id: ""
	I1217 00:59:09.283369 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.283377 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:09.283382 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:09.283452 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:09.322894 1176706 cri.go:89] found id: ""
	I1217 00:59:09.322909 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.322917 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:09.322924 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:09.322983 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:09.349261 1176706 cri.go:89] found id: ""
	I1217 00:59:09.349275 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.349282 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:09.349290 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:09.349348 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:09.375365 1176706 cri.go:89] found id: ""
	I1217 00:59:09.375381 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.375390 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:09.375395 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:09.375458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:09.404751 1176706 cri.go:89] found id: ""
	I1217 00:59:09.404765 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.404773 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:09.404778 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:09.404840 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:09.430184 1176706 cri.go:89] found id: ""
	I1217 00:59:09.430198 1176706 logs.go:282] 0 containers: []
	W1217 00:59:09.430206 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:09.430214 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:09.430224 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:09.496857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:09.496876 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:09.515406 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:09.515423 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:09.581087 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:09.572943   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.573735   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575320   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.575649   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:09.577138   15941 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:09.581098 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:09.581109 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:09.650268 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:09.650288 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.181362 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:12.192867 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:12.192928 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:12.219737 1176706 cri.go:89] found id: ""
	I1217 00:59:12.219750 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.219757 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:12.219763 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:12.219821 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:12.245063 1176706 cri.go:89] found id: ""
	I1217 00:59:12.245084 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.245091 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:12.245097 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:12.245165 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:12.272131 1176706 cri.go:89] found id: ""
	I1217 00:59:12.272145 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.272152 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:12.272157 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:12.272216 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:12.303997 1176706 cri.go:89] found id: ""
	I1217 00:59:12.304011 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.304018 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:12.304024 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:12.304085 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:12.333611 1176706 cri.go:89] found id: ""
	I1217 00:59:12.333624 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.333632 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:12.333637 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:12.333693 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:12.363773 1176706 cri.go:89] found id: ""
	I1217 00:59:12.363789 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.363797 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:12.363802 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:12.363863 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:12.389846 1176706 cri.go:89] found id: ""
	I1217 00:59:12.389861 1176706 logs.go:282] 0 containers: []
	W1217 00:59:12.389868 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:12.389875 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:12.389886 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:12.407604 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:12.407621 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:12.473182 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:12.464893   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.465529   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467107   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.467655   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:12.469235   16044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:12.473192 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:12.473203 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:12.543348 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:12.543369 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:12.577767 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:12.577783 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.146065 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:15.160131 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:15.160197 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:15.190612 1176706 cri.go:89] found id: ""
	I1217 00:59:15.190626 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.190634 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:15.190639 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:15.190699 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:15.218099 1176706 cri.go:89] found id: ""
	I1217 00:59:15.218113 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.218121 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:15.218126 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:15.218184 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:15.248835 1176706 cri.go:89] found id: ""
	I1217 00:59:15.248848 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.248856 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:15.248861 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:15.248918 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:15.285228 1176706 cri.go:89] found id: ""
	I1217 00:59:15.285242 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.285250 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:15.285256 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:15.285342 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:15.319666 1176706 cri.go:89] found id: ""
	I1217 00:59:15.319684 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.319692 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:15.319697 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:15.319762 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:15.349950 1176706 cri.go:89] found id: ""
	I1217 00:59:15.349964 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.349971 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:15.349985 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:15.350057 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:15.377523 1176706 cri.go:89] found id: ""
	I1217 00:59:15.377539 1176706 logs.go:282] 0 containers: []
	W1217 00:59:15.377546 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:15.377553 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:15.377563 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:15.444971 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:15.444997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:15.463350 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:15.463367 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:15.527808 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:15.519384   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.520209   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.521696   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.522187   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:15.523778   16151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:15.527819 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:15.527829 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:15.596798 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:15.596819 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.130677 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:18.141262 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:18.141323 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:18.169116 1176706 cri.go:89] found id: ""
	I1217 00:59:18.169130 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.169138 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:18.169144 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:18.169213 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:18.196282 1176706 cri.go:89] found id: ""
	I1217 00:59:18.196296 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.196303 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:18.196308 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:18.196374 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:18.221983 1176706 cri.go:89] found id: ""
	I1217 00:59:18.222001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.222008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:18.222014 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:18.222104 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:18.253664 1176706 cri.go:89] found id: ""
	I1217 00:59:18.253678 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.253695 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:18.253701 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:18.253759 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:18.285902 1176706 cri.go:89] found id: ""
	I1217 00:59:18.285926 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.285935 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:18.285940 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:18.286012 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:18.314726 1176706 cri.go:89] found id: ""
	I1217 00:59:18.314740 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.314747 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:18.314762 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:18.314817 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:18.344853 1176706 cri.go:89] found id: ""
	I1217 00:59:18.344867 1176706 logs.go:282] 0 containers: []
	W1217 00:59:18.344875 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:18.344882 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:18.344904 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:18.414538 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:18.414559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:18.447095 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:18.447111 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:18.512991 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:18.513011 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:18.533994 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:18.534020 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:18.598850 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:18.590741   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.591231   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.592811   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.593221   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:18.594712   16270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.100519 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:21.110642 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:21.110704 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:21.135662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.135677 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.135684 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:21.135690 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:21.135749 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:21.165495 1176706 cri.go:89] found id: ""
	I1217 00:59:21.165508 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.165515 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:21.165522 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:21.165581 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:21.190194 1176706 cri.go:89] found id: ""
	I1217 00:59:21.190216 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.190224 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:21.190229 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:21.190286 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:21.215635 1176706 cri.go:89] found id: ""
	I1217 00:59:21.215658 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.215668 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:21.215674 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:21.215741 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:21.240901 1176706 cri.go:89] found id: ""
	I1217 00:59:21.240915 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.240922 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:21.240928 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:21.240985 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:21.282662 1176706 cri.go:89] found id: ""
	I1217 00:59:21.282676 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.282683 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:21.282689 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:21.282747 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:21.318909 1176706 cri.go:89] found id: ""
	I1217 00:59:21.318937 1176706 logs.go:282] 0 containers: []
	W1217 00:59:21.318946 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:21.318955 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:21.318981 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:21.389438 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:21.389459 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:21.407933 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:21.407951 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:21.470948 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:21.462633   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.463089   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.464692   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.465026   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:21.466608   16362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:21.470958 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:21.470970 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:21.543202 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:21.543223 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:24.074213 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:24.084903 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:24.084967 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:24.111192 1176706 cri.go:89] found id: ""
	I1217 00:59:24.111207 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.111214 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:24.111221 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:24.111280 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:24.137550 1176706 cri.go:89] found id: ""
	I1217 00:59:24.137564 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.137572 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:24.137577 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:24.137638 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:24.163576 1176706 cri.go:89] found id: ""
	I1217 00:59:24.163590 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.163598 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:24.163603 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:24.163661 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:24.191365 1176706 cri.go:89] found id: ""
	I1217 00:59:24.191379 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.191386 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:24.191391 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:24.191451 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:24.218021 1176706 cri.go:89] found id: ""
	I1217 00:59:24.218036 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.218043 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:24.218048 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:24.218109 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:24.243066 1176706 cri.go:89] found id: ""
	I1217 00:59:24.243079 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.243086 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:24.243092 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:24.243150 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:24.273424 1176706 cri.go:89] found id: ""
	I1217 00:59:24.273438 1176706 logs.go:282] 0 containers: []
	W1217 00:59:24.273446 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:24.273453 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:24.273468 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:24.352524 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:24.352545 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:24.370425 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:24.370445 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:24.435871 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:24.428060   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.428653   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430114   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.430489   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:24.431924   16467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:24.435881 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:24.435896 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:24.504929 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:24.504949 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.033266 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:27.043460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:27.043521 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:27.068665 1176706 cri.go:89] found id: ""
	I1217 00:59:27.068679 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.068686 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:27.068698 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:27.068754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:27.094007 1176706 cri.go:89] found id: ""
	I1217 00:59:27.094021 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.094028 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:27.094033 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:27.094092 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:27.118910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.118923 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.118931 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:27.118936 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:27.118994 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:27.147303 1176706 cri.go:89] found id: ""
	I1217 00:59:27.147317 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.147324 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:27.147330 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:27.147386 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:27.172343 1176706 cri.go:89] found id: ""
	I1217 00:59:27.172357 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.172365 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:27.172370 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:27.172458 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:27.197910 1176706 cri.go:89] found id: ""
	I1217 00:59:27.197924 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.197932 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:27.197938 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:27.198001 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:27.227576 1176706 cri.go:89] found id: ""
	I1217 00:59:27.227591 1176706 logs.go:282] 0 containers: []
	W1217 00:59:27.227598 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:27.227606 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:27.227618 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:27.311005 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:27.303219   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.304018   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305569   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.305888   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:27.307299   16564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:27.311016 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:27.311026 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:27.382732 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:27.382752 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:27.415820 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:27.415836 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:27.482903 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:27.482926 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.004621 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:30.030664 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:30.030745 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:30.081469 1176706 cri.go:89] found id: ""
	I1217 00:59:30.081485 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.081493 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:30.081499 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:30.081566 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:30.113916 1176706 cri.go:89] found id: ""
	I1217 00:59:30.113931 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.113939 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:30.113946 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:30.114011 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:30.145424 1176706 cri.go:89] found id: ""
	I1217 00:59:30.145439 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.145447 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:30.145453 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:30.145519 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:30.172979 1176706 cri.go:89] found id: ""
	I1217 00:59:30.172993 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.173000 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:30.173006 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:30.173068 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:30.203666 1176706 cri.go:89] found id: ""
	I1217 00:59:30.203680 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.203688 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:30.203693 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:30.203754 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:30.230252 1176706 cri.go:89] found id: ""
	I1217 00:59:30.230266 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.230274 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:30.230280 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:30.230346 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:30.263265 1176706 cri.go:89] found id: ""
	I1217 00:59:30.263288 1176706 logs.go:282] 0 containers: []
	W1217 00:59:30.263297 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:30.263305 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:30.263317 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:30.285817 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:30.285833 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:30.357587 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:30.349769   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.350140   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.351767   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.352096   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:30.353574   16679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:30.357597 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:30.357609 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:30.426496 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:30.426518 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:30.455371 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:30.455387 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:33.025588 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:33.037063 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:33.037133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:33.066495 1176706 cri.go:89] found id: ""
	I1217 00:59:33.066510 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.066518 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:33.066531 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:33.066593 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:33.094203 1176706 cri.go:89] found id: ""
	I1217 00:59:33.094218 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.094225 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:33.094230 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:33.094289 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:33.121048 1176706 cri.go:89] found id: ""
	I1217 00:59:33.121062 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.121070 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:33.121076 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:33.121137 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:33.148530 1176706 cri.go:89] found id: ""
	I1217 00:59:33.148559 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.148568 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:33.148574 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:33.148647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:33.175802 1176706 cri.go:89] found id: ""
	I1217 00:59:33.175816 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.175823 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:33.175829 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:33.175892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:33.206535 1176706 cri.go:89] found id: ""
	I1217 00:59:33.206548 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.206556 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:33.206562 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:33.206623 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:33.236039 1176706 cri.go:89] found id: ""
	I1217 00:59:33.236052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:33.236060 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:33.236068 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:33.236078 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:33.255180 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:33.255197 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:33.339098 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:33.331013   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.331426   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333087   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.333564   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:33.334652   16787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:33.339108 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:33.339121 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:33.412971 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:33.412997 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:33.441676 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:33.441694 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.008647 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:36.020237 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:36.020301 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:36.052600 1176706 cri.go:89] found id: ""
	I1217 00:59:36.052616 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.052623 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:36.052629 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:36.052692 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:36.081744 1176706 cri.go:89] found id: ""
	I1217 00:59:36.081759 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.081768 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:36.081773 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:36.081841 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:36.109987 1176706 cri.go:89] found id: ""
	I1217 00:59:36.110001 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.110008 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:36.110013 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:36.110077 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:36.134954 1176706 cri.go:89] found id: ""
	I1217 00:59:36.134967 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.134975 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:36.134980 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:36.135037 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:36.159862 1176706 cri.go:89] found id: ""
	I1217 00:59:36.159876 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.159884 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:36.159889 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:36.159947 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:36.187809 1176706 cri.go:89] found id: ""
	I1217 00:59:36.187822 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.187829 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:36.187835 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:36.187904 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:36.214242 1176706 cri.go:89] found id: ""
	I1217 00:59:36.214257 1176706 logs.go:282] 0 containers: []
	W1217 00:59:36.214264 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:36.214272 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:36.214283 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:36.286225 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:36.286244 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:36.305628 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:36.305646 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:36.371158 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:36.362976   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.363378   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365174   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.365540   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:36.367137   16894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:36.371170 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:36.371181 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:36.439045 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:36.439065 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:38.969106 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:38.979363 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:38.979424 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:39.007795 1176706 cri.go:89] found id: ""
	I1217 00:59:39.007810 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.007818 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:39.007824 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:39.007888 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:39.034152 1176706 cri.go:89] found id: ""
	I1217 00:59:39.034166 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.034173 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:39.034179 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:39.034238 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:39.059914 1176706 cri.go:89] found id: ""
	I1217 00:59:39.059928 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.059935 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:39.059941 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:39.060002 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:39.085320 1176706 cri.go:89] found id: ""
	I1217 00:59:39.085334 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.085341 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:39.085349 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:39.085405 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:39.110285 1176706 cri.go:89] found id: ""
	I1217 00:59:39.110298 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.110306 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:39.110311 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:39.110372 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:39.135035 1176706 cri.go:89] found id: ""
	I1217 00:59:39.135058 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.135066 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:39.135072 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:39.135139 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:39.159817 1176706 cri.go:89] found id: ""
	I1217 00:59:39.159830 1176706 logs.go:282] 0 containers: []
	W1217 00:59:39.159848 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:39.159857 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:39.159872 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:39.177791 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:39.177809 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:39.249533 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:39.240840   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.241421   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243206   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.243957   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:39.245546   16992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:39.249543 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:39.249552 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:39.325557 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:39.325577 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:39.360066 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:39.360085 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:41.928404 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:41.938632 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:41.938696 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:41.964031 1176706 cri.go:89] found id: ""
	I1217 00:59:41.964052 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.964059 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:41.964064 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:41.964122 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:41.993062 1176706 cri.go:89] found id: ""
	I1217 00:59:41.993076 1176706 logs.go:282] 0 containers: []
	W1217 00:59:41.993084 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:41.993089 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:41.993160 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:42.033652 1176706 cri.go:89] found id: ""
	I1217 00:59:42.033667 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.033676 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:42.033681 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:42.033746 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:42.060629 1176706 cri.go:89] found id: ""
	I1217 00:59:42.060645 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.060653 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:42.060659 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:42.060722 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:42.092817 1176706 cri.go:89] found id: ""
	I1217 00:59:42.092845 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.092853 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:42.092868 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:42.092941 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:42.136486 1176706 cri.go:89] found id: ""
	I1217 00:59:42.136506 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.136515 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:42.136521 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:42.136592 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:42.171937 1176706 cri.go:89] found id: ""
	I1217 00:59:42.171952 1176706 logs.go:282] 0 containers: []
	W1217 00:59:42.171959 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:42.171967 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:42.171979 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:42.262695 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:42.242670   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.243865   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.245324   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.246556   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:42.251962   17092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:42.262707 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:42.262718 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:42.339199 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:42.339220 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:42.372997 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:42.373025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:42.446036 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:42.446055 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:44.965013 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:44.976094 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:44.976161 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:45.015164 1176706 cri.go:89] found id: ""
	I1217 00:59:45.015181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.015189 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:45.015195 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:45.015272 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:45.071613 1176706 cri.go:89] found id: ""
	I1217 00:59:45.071635 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.071643 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:45.071649 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:45.071715 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:45.119793 1176706 cri.go:89] found id: ""
	I1217 00:59:45.119818 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.119826 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:45.119839 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:45.119914 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:45.151783 1176706 cri.go:89] found id: ""
	I1217 00:59:45.151800 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.151808 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:45.151814 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:45.151892 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:45.215691 1176706 cri.go:89] found id: ""
	I1217 00:59:45.215708 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.215717 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:45.215723 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:45.215788 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:45.307588 1176706 cri.go:89] found id: ""
	I1217 00:59:45.307603 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.307612 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:45.307617 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:45.307686 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:45.338241 1176706 cri.go:89] found id: ""
	I1217 00:59:45.338255 1176706 logs.go:282] 0 containers: []
	W1217 00:59:45.338262 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:45.338270 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:45.338281 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:45.369988 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:45.370005 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:45.441693 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:45.441715 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:45.461548 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:45.461567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:45.548353 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:45.539536   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.540064   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.541749   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.542176   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:45.543894   17219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:45.548363 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:45.548374 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.120029 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:48.130460 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:48.130527 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:48.158049 1176706 cri.go:89] found id: ""
	I1217 00:59:48.158063 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.158070 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:48.158075 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:48.158133 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:48.183768 1176706 cri.go:89] found id: ""
	I1217 00:59:48.183782 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.183790 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:48.183795 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:48.183853 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:48.209858 1176706 cri.go:89] found id: ""
	I1217 00:59:48.209883 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.209891 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:48.209897 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:48.209969 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:48.239433 1176706 cri.go:89] found id: ""
	I1217 00:59:48.239447 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.239464 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:48.239470 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:48.239546 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:48.283289 1176706 cri.go:89] found id: ""
	I1217 00:59:48.283312 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.283320 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:48.283325 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:48.283401 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:48.314402 1176706 cri.go:89] found id: ""
	I1217 00:59:48.314429 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.314437 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:48.314443 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:48.314511 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:48.343692 1176706 cri.go:89] found id: ""
	I1217 00:59:48.343706 1176706 logs.go:282] 0 containers: []
	W1217 00:59:48.343727 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:48.343735 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:48.343745 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:48.362542 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:48.362560 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:48.427994 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:48.418765   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.419590   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.421393   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.422047   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:48.423826   17309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:48.428004 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:48.428016 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:48.499539 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:48.499559 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:59:48.531009 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:48.531025 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.098220 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:51.109265 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:59:51.109331 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:59:51.136197 1176706 cri.go:89] found id: ""
	I1217 00:59:51.136213 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.136221 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:59:51.136227 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:59:51.136287 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:59:51.163078 1176706 cri.go:89] found id: ""
	I1217 00:59:51.163092 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.163100 1176706 logs.go:284] No container was found matching "etcd"
	I1217 00:59:51.163105 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:59:51.163172 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:59:51.191839 1176706 cri.go:89] found id: ""
	I1217 00:59:51.191853 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.191861 1176706 logs.go:284] No container was found matching "coredns"
	I1217 00:59:51.191866 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:59:51.191949 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:59:51.218098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.218116 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.218124 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:59:51.218130 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:59:51.218211 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:59:51.243098 1176706 cri.go:89] found id: ""
	I1217 00:59:51.243112 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.243120 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:59:51.243125 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:59:51.243191 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:59:51.271566 1176706 cri.go:89] found id: ""
	I1217 00:59:51.271579 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.271586 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:59:51.271591 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:59:51.271647 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:59:51.305158 1176706 cri.go:89] found id: ""
	I1217 00:59:51.305181 1176706 logs.go:282] 0 containers: []
	W1217 00:59:51.305187 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 00:59:51.305196 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 00:59:51.305207 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:59:51.376352 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 00:59:51.376373 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:59:51.394410 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:59:51.394427 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:59:51.459231 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:59:51.451110   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.451608   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453348   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.453713   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:51.455259   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:59:51.459240 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 00:59:51.459251 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 00:59:51.528231 1176706 logs.go:123] Gathering logs for container status ...
	I1217 00:59:51.528252 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
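For reference, the diagnostics gathered in the lines above can be reproduced by hand on the node. This is only a consolidated sketch of the same commands minikube runs here (unit names, the kubectl path, and the crictl/docker fallback are taken verbatim from this log):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a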
	I1217 00:59:54.058312 1176706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:59:54.069031 1176706 kubeadm.go:602] duration metric: took 4m2.785263609s to restartPrimaryControlPlane
	W1217 00:59:54.069095 1176706 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:59:54.069181 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 00:59:54.486154 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:59:54.499356 1176706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:59:54.507725 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:59:54.507779 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:59:54.515997 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:59:54.516007 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 00:59:54.516064 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:59:54.524157 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:59:54.524213 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:59:54.532265 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:59:54.540638 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:59:54.540707 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:59:54.548269 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.556326 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:59:54.556388 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:59:54.564545 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:59:54.572682 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:59:54.572738 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
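The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the check fails; in this run the files do not exist, so every grep exits non-zero and the rm is a no-op. A minimal sketch of that loop, assuming the same endpoint and file set as shown in this log:

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done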
	I1217 00:59:54.580611 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:59:54.700281 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 00:59:54.700747 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:59:54.763643 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:03:56.152758 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:03:56.152795 1176706 kubeadm.go:319] 
	I1217 01:03:56.152869 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:03:56.156728 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.156797 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.156958 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.157014 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.157073 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.157118 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.157197 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.157253 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.157300 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.157352 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.157400 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.157453 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.157508 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.157553 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.157624 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.157727 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.157824 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.157884 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.160971 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.161055 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.161118 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.161193 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.161252 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.161327 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.161379 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.161441 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.161501 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.161574 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.161645 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.161681 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.161741 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:56.161790 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:56.161845 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:56.161896 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:56.161957 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:56.162010 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:56.162092 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:56.162157 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:56.165021 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:56.165147 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:56.165231 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:56.165300 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:56.165418 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:56.165512 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:56.165614 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:56.165696 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:56.165733 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:56.165861 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:56.165963 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:03:56.166026 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000240228s
	I1217 01:03:56.166028 1176706 kubeadm.go:319] 
	I1217 01:03:56.166083 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:03:56.166114 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:03:56.166215 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:03:56.166218 1176706 kubeadm.go:319] 
	I1217 01:03:56.166320 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:03:56.166351 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:03:56.166380 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
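kubeadm gives up after the kubelet health endpoint stays unreachable for 4m0s. Besides the two commands it suggests above, the same probe it performs can be run directly on the node; the URL is the one quoted in the wait-control-plane error in this log:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    curl -sSL http://127.0.0.1:10248/healthz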
	W1217 01:03:56.166487 1176706 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000240228s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:03:56.166580 1176706 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 01:03:56.166903 1176706 kubeadm.go:319] 
	I1217 01:03:56.586040 1176706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:03:56.599481 1176706 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:03:56.599536 1176706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:03:56.607687 1176706 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:03:56.607697 1176706 kubeadm.go:158] found existing configuration files:
	
	I1217 01:03:56.607750 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 01:03:56.615588 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:03:56.615644 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:03:56.623820 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 01:03:56.631817 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:03:56.631875 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:03:56.639771 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.647723 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:03:56.647784 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:03:56.655274 1176706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 01:03:56.662953 1176706 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:03:56.663009 1176706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:03:56.671031 1176706 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:03:56.709331 1176706 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:03:56.709382 1176706 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:03:56.784528 1176706 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:03:56.784593 1176706 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:03:56.784627 1176706 kubeadm.go:319] OS: Linux
	I1217 01:03:56.784671 1176706 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:03:56.784718 1176706 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:03:56.784764 1176706 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:03:56.784811 1176706 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:03:56.784857 1176706 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:03:56.784907 1176706 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:03:56.784950 1176706 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:03:56.784997 1176706 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:03:56.785046 1176706 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:03:56.852730 1176706 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:03:56.852846 1176706 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:03:56.852941 1176706 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:03:56.864882 1176706 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:03:56.870169 1176706 out.go:252]   - Generating certificates and keys ...
	I1217 01:03:56.870260 1176706 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:03:56.870331 1176706 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:03:56.870414 1176706 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:03:56.870480 1176706 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:03:56.870560 1176706 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:03:56.870623 1176706 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:03:56.870698 1176706 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:03:56.870772 1176706 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:03:56.870857 1176706 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:03:56.870939 1176706 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:03:56.870985 1176706 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:03:56.871053 1176706 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:03:57.081118 1176706 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:03:57.308024 1176706 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:03:57.795688 1176706 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:03:58.747783 1176706 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:03:59.056308 1176706 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:03:59.056908 1176706 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:03:59.061460 1176706 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:03:59.064667 1176706 out.go:252]   - Booting up control plane ...
	I1217 01:03:59.064766 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:03:59.064843 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:03:59.064909 1176706 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:03:59.079437 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:03:59.079539 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:03:59.087425 1176706 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:03:59.087990 1176706 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:03:59.088228 1176706 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:03:59.232706 1176706 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:03:59.232823 1176706 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:07:59.232882 1176706 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288911s
	I1217 01:07:59.232905 1176706 kubeadm.go:319] 
	I1217 01:07:59.232961 1176706 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:07:59.232994 1176706 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:07:59.233119 1176706 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:07:59.233124 1176706 kubeadm.go:319] 
	I1217 01:07:59.233227 1176706 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:07:59.233261 1176706 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:07:59.233291 1176706 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:07:59.233294 1176706 kubeadm.go:319] 
	I1217 01:07:59.237945 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:07:59.238359 1176706 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:07:59.238466 1176706 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:07:59.238699 1176706 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:07:59.238704 1176706 kubeadm.go:319] 
	I1217 01:07:59.238771 1176706 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:07:59.238833 1176706 kubeadm.go:403] duration metric: took 12m7.995613678s to StartCluster
	I1217 01:07:59.238862 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:07:59.238924 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:07:59.265092 1176706 cri.go:89] found id: ""
	I1217 01:07:59.265110 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.265118 1176706 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:07:59.265124 1176706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:07:59.265190 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:07:59.289869 1176706 cri.go:89] found id: ""
	I1217 01:07:59.289884 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.289891 1176706 logs.go:284] No container was found matching "etcd"
	I1217 01:07:59.289896 1176706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:07:59.289954 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:07:59.315177 1176706 cri.go:89] found id: ""
	I1217 01:07:59.315192 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.315200 1176706 logs.go:284] No container was found matching "coredns"
	I1217 01:07:59.315206 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:07:59.315267 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:07:59.343402 1176706 cri.go:89] found id: ""
	I1217 01:07:59.343422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.343429 1176706 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:07:59.343435 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:07:59.343492 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:07:59.369351 1176706 cri.go:89] found id: ""
	I1217 01:07:59.369367 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.369375 1176706 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:07:59.369381 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:07:59.369446 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:07:59.395407 1176706 cri.go:89] found id: ""
	I1217 01:07:59.395422 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.395430 1176706 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:07:59.395436 1176706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:07:59.395497 1176706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:07:59.425527 1176706 cri.go:89] found id: ""
	I1217 01:07:59.425542 1176706 logs.go:282] 0 containers: []
	W1217 01:07:59.425549 1176706 logs.go:284] No container was found matching "kindnet"
	I1217 01:07:59.425557 1176706 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:07:59.425567 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:07:59.496396 1176706 logs.go:123] Gathering logs for container status ...
	I1217 01:07:59.496422 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:07:59.529365 1176706 logs.go:123] Gathering logs for kubelet ...
	I1217 01:07:59.529381 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:07:59.607059 1176706 logs.go:123] Gathering logs for dmesg ...
	I1217 01:07:59.607079 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:07:59.625460 1176706 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:07:59.625476 1176706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:07:59.694111 1176706 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:07:59.685961   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.686564   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688104   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.688792   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:07:59.689955   21253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 01:07:59.694128 1176706 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:07:59.694160 1176706 out.go:285] * 
	W1217 01:07:59.696578 1176706 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.696718 1176706 out.go:285] * 
	W1217 01:07:59.699147 1176706 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:07:59.705064 1176706 out.go:203] 
	W1217 01:07:59.708024 1176706 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288911s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:07:59.708074 1176706 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:07:59.708093 1176706 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:07:59.711386 1176706 out.go:203] 
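minikube exits with K8S_KUBELET_NOT_RUNNING and suggests passing the kubelet cgroup driver explicitly. A sketch of a retry with that suggestion applied to this cluster (the profile name, docker driver, and CRI-O runtime are taken from this log; the log does not establish whether this resolves the failure on this cgroup v1 host):

    minikube start -p functional-389537 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd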
	
	
	==> CRI-O <==
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.856470274Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=ef116d89-326a-4264-be1a-c1a1c61f856f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.85716241Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=48ae23b1-9237-4abe-8586-a22789c1855d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.857752633Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=3cdbc308-65b6-45fa-9f9e-f10e79119ca3 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858320825Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=3d72515c-27e8-4599-9a3a-55c1e786e2d4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.858852571Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df55df6f-24f3-440d-9630-435b19250644 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859434761Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=76977bf3-dbf1-4740-ab7e-261b44d6cbc4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:03:56 functional-389537 crio[10035]: time="2025-12-17T01:03:56.859913322Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3a88b64b-7c2e-4efa-a683-a7222714b1da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682372585Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68256814Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.68261275Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.682675452Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=119812f2-0790-4d84-a2da-b0cdb94ae1a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711610422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711759996Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.711798871Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=f0f4adc4-28ab-455c-be9a-296545d86aa9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739279084Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739445118Z" level=info msg="Image localhost/kicbase/echo-server:functional-389537 not found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:08 functional-389537 crio[10035]: time="2025-12-17T01:08:08.739495176Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-389537 found" id=1cb99eb1-b367-46a6-ba61-6ad348f59b2a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732782966Z" level=info msg="Checking image status: kicbase/echo-server:functional-389537" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.732960388Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733031024Z" level=info msg="Image kicbase/echo-server:functional-389537 not found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.733098123Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-389537 found" id=bb3f22b8-f842-4e5b-aa80-369aae7a5428 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765602567Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-389537" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765759674Z" level=info msg="Image docker.io/kicbase/echo-server:functional-389537 not found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.765805293Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-389537 found" id=6b261b23-592c-4529-93f4-2e6f05b0921c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:08:11 functional-389537 crio[10035]: time="2025-12-17T01:08:11.806741123Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-389537" id=54276271-8e2f-42ec-a439-ea95344609a5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:08:14.461704   22238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:14.462301   22238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:14.463754   22238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:14.464159   22238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 01:08:14.465604   22238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 23:38] overlayfs: idmapped layers are currently not supported
	[Dec16 23:49] overlayfs: idmapped layers are currently not supported
	[Dec16 23:51] overlayfs: idmapped layers are currently not supported
	[Dec16 23:52] overlayfs: idmapped layers are currently not supported
	[  +3.070921] overlayfs: idmapped layers are currently not supported
	[Dec16 23:53] overlayfs: idmapped layers are currently not supported
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:08:14 up  6:50,  0 user,  load average: 0.68, 0.32, 0.48
	Linux functional-389537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:08:11 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:12 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2137.
	Dec 17 01:08:12 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:12 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:12 functional-389537 kubelet[22080]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:12 functional-389537 kubelet[22080]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:12 functional-389537 kubelet[22080]: E1217 01:08:12.564725   22080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:12 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:12 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:13 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2138.
	Dec 17 01:08:13 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:13 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:13 functional-389537 kubelet[22133]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:13 functional-389537 kubelet[22133]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:13 functional-389537 kubelet[22133]: E1217 01:08:13.321420   22133 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:13 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:13 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:08:13 functional-389537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2139.
	Dec 17 01:08:13 functional-389537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:13 functional-389537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:08:14 functional-389537 kubelet[22153]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:14 functional-389537 kubelet[22153]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 01:08:14 functional-389537 kubelet[22153]: E1217 01:08:14.068168   22153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:08:14 functional-389537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:08:14 functional-389537 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389537 -n functional-389537: exit status 2 (353.980071ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-389537" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (2.18s)
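The kubelet journal captured above shows the same validation failure on every restart: the v1.35.0-beta.0 kubelet refuses to start because the node is still on cgroup v1, which in turn leaves the apiserver on :8441 unreachable for the NodeLabels check. A minimal way to confirm a host's cgroup mode from a shell (not part of the test harness; assumes a Linux host with /sys/fs/cgroup mounted) is:

  # cgroup2fs => unified hierarchy (cgroup v2); anything else (typically tmpfs) => legacy cgroup v1,
  # which matches the kubelet validation error logged above.
  if [ "$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)" = "cgroup2fs" ]; then
    echo "host is on cgroup v2"
  else
    echo "host is on cgroup v1"
  fi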

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1217 01:08:07.012844 1189442 out.go:360] Setting OutFile to fd 1 ...
I1217 01:08:07.013052 1189442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:08:07.013075 1189442 out.go:374] Setting ErrFile to fd 2...
I1217 01:08:07.013099 1189442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:08:07.013387 1189442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:08:07.013672 1189442 mustload.go:66] Loading cluster: functional-389537
I1217 01:08:07.014147 1189442 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:08:07.014740 1189442 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:08:07.046080 1189442 host.go:66] Checking if "functional-389537" exists ...
I1217 01:08:07.046394 1189442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:08:07.154834 1189442 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:08:07.143373978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:08:07.154942 1189442 api_server.go:166] Checking apiserver status ...
I1217 01:08:07.155002 1189442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 01:08:07.155040 1189442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:08:07.213361 1189442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
W1217 01:08:07.352938 1189442 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 01:08:07.357153 1189442 out.go:179] * The control-plane node functional-389537 apiserver is not running: (state=Stopped)
I1217 01:08:07.360849 1189442 out.go:179]   To start a cluster, run: "minikube start -p functional-389537"

                                                
                                                
stdout: * The control-plane node functional-389537 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-389537"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1189443: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
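Before failing with exit code 103, the tunnel command probes the control plane by running pgrep for kube-apiserver over SSH (see the api_server.go lines in the stderr above) and bails out when no process is found. A rough way to repeat that probe by hand (illustrative only; assumes the functional-389537 profile from this run still exists) is:

  # Reproduce the apiserver liveness check the tunnel performs before exiting 103.
  minikube -p functional-389537 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
    || echo "no kube-apiserver process: matches the (state=Stopped) message above"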

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-389537 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-389537 apply -f testdata/testsvc.yaml: exit status 1 (119.742544ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-389537 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (120.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.99.120.4": Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-389537 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-389537 get svc nginx-svc: exit status 1 (63.531245ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-389537 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (120.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765933699495920901" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765933699495920901" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765933699495920901" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001/test-1765933699495920901
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.386156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 01:08:19.824586 1136597 retry.go:31] will retry after 721.245789ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 01:08 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 01:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 01:08 test-1765933699495920901
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh cat /mount-9p/test-1765933699495920901
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-389537 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-389537 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (61.675412ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-389537 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (277.320214ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40639)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 17 01:08 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 17 01:08 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 17 01:08 test-1765933699495920901
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-389537 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40639
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001:/mount-9p --alsologtostderr -v=1] stderr:
I1217 01:08:19.552347 1191659 out.go:360] Setting OutFile to fd 1 ...
I1217 01:08:19.552742 1191659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:08:19.552752 1191659 out.go:374] Setting ErrFile to fd 2...
I1217 01:08:19.552756 1191659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:08:19.553020 1191659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:08:19.553286 1191659 mustload.go:66] Loading cluster: functional-389537
I1217 01:08:19.553662 1191659 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:08:19.554197 1191659 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:08:19.574908 1191659 host.go:66] Checking if "functional-389537" exists ...
I1217 01:08:19.575217 1191659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:08:19.670025 1191659 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:08:19.656882621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:08:19.670186 1191659 cli_runner.go:164] Run: docker network inspect functional-389537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:08:19.695241 1191659 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001 into VM as /mount-9p ...
I1217 01:08:19.698476 1191659 out.go:179]   - Mount type:   9p
I1217 01:08:19.701513 1191659 out.go:179]   - User ID:      docker
I1217 01:08:19.704550 1191659 out.go:179]   - Group ID:     docker
I1217 01:08:19.707556 1191659 out.go:179]   - Version:      9p2000.L
I1217 01:08:19.710575 1191659 out.go:179]   - Message Size: 262144
I1217 01:08:19.713527 1191659 out.go:179]   - Options:      map[]
I1217 01:08:19.716974 1191659 out.go:179]   - Bind Address: 192.168.49.1:40639
I1217 01:08:19.719790 1191659 out.go:179] * Userspace file server: 
I1217 01:08:19.720171 1191659 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 01:08:19.720285 1191659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:08:19.739244 1191659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:08:19.835210 1191659 mount.go:180] unmount for /mount-9p ran successfully
I1217 01:08:19.835238 1191659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1217 01:08:19.844345 1191659 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40639,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1217 01:08:19.855103 1191659 main.go:127] stdlog: ufs.go:141 connected
I1217 01:08:19.855273 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tversion tag 65535 msize 262144 version '9P2000.L'
I1217 01:08:19.855315 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rversion tag 65535 msize 262144 version '9P2000'
I1217 01:08:19.855565 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1217 01:08:19.855623 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rattach tag 0 aqid (c9d632 29d9d1a0 'd')
I1217 01:08:19.856286 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 0
I1217 01:08:19.856355 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d632 29d9d1a0 'd') m d775 at 0 mt 1765933699 l 4096 t 0 d 0 ext )
I1217 01:08:19.860407 1191659 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/.mount-process: {Name:mk25bcdb9f0a5d42c1eafaecef4361fa079b1413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:08:19.860702 1191659 mount.go:105] mount successful: ""
I1217 01:08:19.864249 1191659 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3202882394/001 to /mount-9p
I1217 01:08:19.867102 1191659 out.go:203] 
I1217 01:08:19.870039 1191659 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1217 01:08:21.069228 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 0
I1217 01:08:21.069312 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d632 29d9d1a0 'd') m d775 at 0 mt 1765933699 l 4096 t 0 d 0 ext )
I1217 01:08:21.069691 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 1 
I1217 01:08:21.069730 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 
I1217 01:08:21.069863 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Topen tag 0 fid 1 mode 0
I1217 01:08:21.069915 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Ropen tag 0 qid (c9d632 29d9d1a0 'd') iounit 0
I1217 01:08:21.070051 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 0
I1217 01:08:21.070090 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d632 29d9d1a0 'd') m d775 at 0 mt 1765933699 l 4096 t 0 d 0 ext )
I1217 01:08:21.070265 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 0 count 262120
I1217 01:08:21.070404 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 258
I1217 01:08:21.070562 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 261862
I1217 01:08:21.070593 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.070724 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:08:21.070753 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.070886 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 01:08:21.070920 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d634 29d9d1a0 '') 
I1217 01:08:21.071047 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.071082 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d634 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.071195 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.071225 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d634 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.071340 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.071364 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.071485 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'test-1765933699495920901' 
I1217 01:08:21.071518 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d636 29d9d1a0 '') 
I1217 01:08:21.071633 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.071662 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.071771 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.071801 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.071913 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.071938 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.072058 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 01:08:21.072097 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d635 29d9d1a0 '') 
I1217 01:08:21.072213 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.072247 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d635 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.072362 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.072392 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d635 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.072516 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.072541 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.072660 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:08:21.072692 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.072835 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 1
I1217 01:08:21.072870 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.328526 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 1 0:'test-1765933699495920901' 
I1217 01:08:21.328598 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d636 29d9d1a0 '') 
I1217 01:08:21.328778 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 1
I1217 01:08:21.328826 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.328985 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 1 newfid 2 
I1217 01:08:21.329019 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 
I1217 01:08:21.329148 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Topen tag 0 fid 2 mode 0
I1217 01:08:21.329201 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Ropen tag 0 qid (c9d636 29d9d1a0 '') iounit 0
I1217 01:08:21.329372 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 1
I1217 01:08:21.329435 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.329610 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 2 offset 0 count 262120
I1217 01:08:21.329666 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 24
I1217 01:08:21.329805 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 2 offset 24 count 262120
I1217 01:08:21.329838 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.329980 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 2 offset 24 count 262120
I1217 01:08:21.330026 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.330197 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.330233 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.330398 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 1
I1217 01:08:21.330424 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.671644 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 0
I1217 01:08:21.671727 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d632 29d9d1a0 'd') m d775 at 0 mt 1765933699 l 4096 t 0 d 0 ext )
I1217 01:08:21.672080 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 1 
I1217 01:08:21.672116 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 
I1217 01:08:21.672256 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Topen tag 0 fid 1 mode 0
I1217 01:08:21.672309 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Ropen tag 0 qid (c9d632 29d9d1a0 'd') iounit 0
I1217 01:08:21.672448 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 0
I1217 01:08:21.672488 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (c9d632 29d9d1a0 'd') m d775 at 0 mt 1765933699 l 4096 t 0 d 0 ext )
I1217 01:08:21.672648 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 0 count 262120
I1217 01:08:21.672760 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 258
I1217 01:08:21.672885 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 261862
I1217 01:08:21.672941 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.673092 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:08:21.673165 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.673298 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 01:08:21.673338 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d634 29d9d1a0 '') 
I1217 01:08:21.673459 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.673497 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d634 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.673635 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.673669 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (c9d634 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.673801 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.673827 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.673971 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'test-1765933699495920901' 
I1217 01:08:21.674009 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d636 29d9d1a0 '') 
I1217 01:08:21.674128 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.674167 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.674301 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.674337 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('test-1765933699495920901' 'jenkins' 'jenkins' '' q (c9d636 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.674467 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.674496 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.674641 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 01:08:21.674673 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rwalk tag 0 (c9d635 29d9d1a0 '') 
I1217 01:08:21.674785 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.674819 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d635 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.674956 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tstat tag 0 fid 2
I1217 01:08:21.674996 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (c9d635 29d9d1a0 '') m 644 at 0 mt 1765933699 l 24 t 0 d 0 ext )
I1217 01:08:21.675114 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 2
I1217 01:08:21.675143 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.675256 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tread tag 0 fid 1 offset 258 count 262120
I1217 01:08:21.675287 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rread tag 0 count 0
I1217 01:08:21.675429 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 1
I1217 01:08:21.675464 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.676662 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1217 01:08:21.676737 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rerror tag 0 ename 'file not found' ecode 0
I1217 01:08:21.937600 1191659 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35402 Tclunk tag 0 fid 0
I1217 01:08:21.937674 1191659 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35402 Rclunk tag 0
I1217 01:08:21.938778 1191659 main.go:127] stdlog: ufs.go:147 disconnected
I1217 01:08:21.960852 1191659 out.go:179] * Unmounting /mount-9p ...
I1217 01:08:21.963644 1191659 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 01:08:21.970604 1191659 mount.go:180] unmount for /mount-9p ran successfully
I1217 01:08:21.970723 1191659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/.mount-process: {Name:mk25bcdb9f0a5d42c1eafaecef4361fa079b1413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:08:21.973774 1191659 out.go:203] 
W1217 01:08:21.976635 1191659 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1217 01:08:21.979493 1191659 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.56s)
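The mount log above shows both halves of the 9p setup: the host-side userspace file server started by "minikube mount" (bound to 192.168.49.1:40639 in this run) and the guest-side "mount -t 9p" it issues over SSH; the test itself only fails at the busybox-mount pod step because the apiserver is down. A stripped-down way to exercise the same path outside the harness (illustrative; assumes a running functional-389537 profile and an arbitrary host directory) is:

  # Host side: start the 9p file server; as the log notes, this process must stay alive.
  minikube -p functional-389537 mount /tmp/hostdir:/mount-9p &
  # Guest side: the same check the test runs to confirm the 9p mount is visible.
  minikube -p functional-389537 ssh -- "findmnt -T /mount-9p | grep 9p"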

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-389537 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-389537 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.897017ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-389537 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 service list: exit status 103 (257.134649ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389537 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389537"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-389537 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-389537 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389537\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 service list -o json: exit status 103 (259.4416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389537 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389537"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-389537 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 service --namespace=default --https --url hello-node: exit status 103 (265.874052ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389537 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389537"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-389537 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 service hello-node --url --format={{.IP}}: exit status 103 (267.118012ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-389537 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389537"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-389537 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-389537 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389537\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)
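
The --format={{.IP}} flag asks only for the service IP, and the test then checks that the printed string parses as one. A minimal sketch of such a check, assuming a net.ParseIP-style validation (not necessarily the exact code at functional_test.go:1558):

package main

import (
	"fmt"
	"net"
)

func main() {
	// What the failing run printed instead of an IP address.
	out := "* The control-plane node functional-389537 apiserver is not running: (state=Stopped)"

	if net.ParseIP(out) == nil {
		fmt.Printf("%q is not a valid IP\n", out) // mirrors the assertion failure above
	}
}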

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 service hello-node --url: exit status 103 (251.5535ms)

-- stdout --
	* The control-plane node functional-389537 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-389537"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-389537 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-389537 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-389537"
functional_test.go:1579: failed to parse "* The control-plane node functional-389537 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389537\"": parse "* The control-plane node functional-389537 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-389537\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.25s)
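
The recorded parse error comes from Go's net/url package: the captured output still contains a newline, and url.Parse rejects any URL containing an ASCII control character. A minimal sketch that reproduces the same error, assuming the stdout is parsed as-is:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Two-line advisory captured instead of a service URL; the embedded
	// newline is a control character, so url.Parse refuses it.
	out := "* The control-plane node functional-389537 apiserver is not running: (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-389537\""

	if _, err := url.Parse(out); err != nil {
		fmt.Println(err) // wraps: net/url: invalid control character in URL
	}
}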

TestMultiControlPlane/serial/RestartSecondaryNode (464.64s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node start m02 --alsologtostderr -v 5
E1217 01:18:07.479270 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:18:10.981230 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:18:35.182376 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:20:07.912097 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:21:45.354037 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:23:07.479358 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:25:07.911533 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-202151 node start m02 --alsologtostderr -v 5: exit status 80 (7m40.691139907s)

-- stdout --
	* Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1217 01:17:31.773448 1212887 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:17:31.774907 1212887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:31.774955 1212887 out.go:374] Setting ErrFile to fd 2...
	I1217 01:17:31.774981 1212887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:31.775275 1212887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:17:31.775633 1212887 mustload.go:66] Loading cluster: ha-202151
	I1217 01:17:31.776111 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:17:31.776779 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	W1217 01:17:31.799302 1212887 host.go:58] "ha-202151-m02" host status: Stopped
	I1217 01:17:31.802503 1212887 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:17:31.805560 1212887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:17:31.808528 1212887 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:17:31.811362 1212887 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:17:31.811412 1212887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:17:31.811434 1212887 cache.go:65] Caching tarball of preloaded images
	I1217 01:17:31.811436 1212887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:17:31.811537 1212887 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:17:31.811552 1212887 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:17:31.811700 1212887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:17:31.831628 1212887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:17:31.831650 1212887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:17:31.831666 1212887 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:17:31.831689 1212887 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:17:31.831755 1212887 start.go:364] duration metric: took 37.743µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:17:31.831780 1212887 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:17:31.831791 1212887 fix.go:54] fixHost starting: m02
	I1217 01:17:31.832053 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:17:31.849530 1212887 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:17:31.849566 1212887 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:17:31.852792 1212887 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:17:31.852880 1212887 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:17:32.152730 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:17:32.177802 1212887 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:17:32.178235 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:17:32.211015 1212887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:17:32.211266 1212887 machine.go:94] provisionDockerMachine start ...
	I1217 01:17:32.211328 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:32.233771 1212887 main.go:143] libmachine: Using SSH client type: native
	I1217 01:17:32.234125 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
	I1217 01:17:32.234140 1212887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:17:32.234829 1212887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:17:35.428529 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:17:35.428559 1212887 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:17:35.428653 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:35.461510 1212887 main.go:143] libmachine: Using SSH client type: native
	I1217 01:17:35.461830 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
	I1217 01:17:35.461848 1212887 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:17:35.667563 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:17:35.667695 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:35.698066 1212887 main.go:143] libmachine: Using SSH client type: native
	I1217 01:17:35.698389 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
	I1217 01:17:35.698411 1212887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:17:35.900605 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:17:35.900698 1212887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:17:35.900752 1212887 ubuntu.go:190] setting up certificates
	I1217 01:17:35.900791 1212887 provision.go:84] configureAuth start
	I1217 01:17:35.900865 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:17:35.928026 1212887 provision.go:143] copyHostCerts
	I1217 01:17:35.928069 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:17:35.928112 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:17:35.928124 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:17:35.928207 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:17:35.928297 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:17:35.928313 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:17:35.928317 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:17:35.928345 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:17:35.928391 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:17:35.928409 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:17:35.928495 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:17:35.928536 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:17:35.928599 1212887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:17:36.007980 1212887 provision.go:177] copyRemoteCerts
	I1217 01:17:36.008124 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:17:36.008197 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:36.028400 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:17:36.137875 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:17:36.137964 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:17:36.205468 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:17:36.205562 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:17:36.245746 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:17:36.245840 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:17:36.273856 1212887 provision.go:87] duration metric: took 373.037787ms to configureAuth
	I1217 01:17:36.273887 1212887 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:17:36.274201 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:17:36.274373 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:36.295248 1212887 main.go:143] libmachine: Using SSH client type: native
	I1217 01:17:36.295601 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
	I1217 01:17:36.295644 1212887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:17:37.725299 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:17:37.725320 1212887 machine.go:97] duration metric: took 5.514045416s to provisionDockerMachine
	I1217 01:17:37.725332 1212887 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:17:37.725342 1212887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:17:37.725428 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:17:37.725475 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:37.745182 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:17:37.856677 1212887 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:17:37.860336 1212887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:17:37.860405 1212887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:17:37.860488 1212887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:17:37.860565 1212887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:17:37.860654 1212887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:17:37.860665 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:17:37.860763 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:17:37.869015 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:17:37.893523 1212887 start.go:296] duration metric: took 168.176295ms for postStartSetup
	I1217 01:17:37.893646 1212887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:17:37.893702 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:37.911121 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:17:38.011002 1212887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:17:38.016919 1212887 fix.go:56] duration metric: took 6.18512497s for fixHost
	I1217 01:17:38.016955 1212887 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.185186482s
	I1217 01:17:38.017084 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:17:38.037169 1212887 ssh_runner.go:195] Run: systemctl --version
	I1217 01:17:38.037225 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:38.037498 1212887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:17:38.037560 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:17:38.058405 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:17:38.065398 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:17:38.162649 1212887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:17:38.349718 1212887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:17:38.354869 1212887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:17:38.354970 1212887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:17:38.370751 1212887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:17:38.370793 1212887 start.go:496] detecting cgroup driver to use...
	I1217 01:17:38.370841 1212887 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:17:38.370925 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:17:38.400471 1212887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:17:38.439074 1212887 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:17:38.439181 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:17:38.466695 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:17:38.485622 1212887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:17:38.735059 1212887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:17:38.990913 1212887 docker.go:234] disabling docker service ...
	I1217 01:17:38.991008 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:17:39.009403 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:17:39.028082 1212887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:17:39.269841 1212887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:17:39.519802 1212887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:17:39.538453 1212887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:17:39.557763 1212887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:17:39.557864 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.575378 1212887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:17:39.575480 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.593802 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.613034 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.623979 1212887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:17:39.633291 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.642960 1212887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.652257 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:17:39.661456 1212887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:17:39.672228 1212887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:17:39.681881 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:17:39.930603 1212887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:19:10.214269 1212887 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.283629403s)
	I1217 01:19:10.214296 1212887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:19:10.214350 1212887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:19:10.218737 1212887 start.go:564] Will wait 60s for crictl version
	I1217 01:19:10.218819 1212887 ssh_runner.go:195] Run: which crictl
	I1217 01:19:10.224256 1212887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:19:10.257852 1212887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:19:10.257940 1212887 ssh_runner.go:195] Run: crio --version
	I1217 01:19:10.292833 1212887 ssh_runner.go:195] Run: crio --version
	I1217 01:19:10.338959 1212887 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:19:10.341947 1212887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:19:10.406986 1212887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-17 01:19:10.396295032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:19:10.407138 1212887 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:19:10.425083 1212887 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:19:10.429681 1212887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:19:10.439896 1212887 mustload.go:66] Loading cluster: ha-202151
	I1217 01:19:10.440155 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:19:10.440475 1212887 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:19:10.459363 1212887 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:19:10.459947 1212887 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:19:10.459969 1212887 certs.go:195] generating shared ca certs ...
	I1217 01:19:10.459984 1212887 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:19:10.460161 1212887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:19:10.460227 1212887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:19:10.460241 1212887 certs.go:257] generating profile certs ...
	I1217 01:19:10.460350 1212887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:19:10.460388 1212887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005
	I1217 01:19:10.460412 1212887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1217 01:19:10.792582 1212887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 ...
	I1217 01:19:10.792655 1212887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005: {Name:mk27b79a4057c5fa5a631faac81eed9dd28fca1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:19:10.792888 1212887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005 ...
	I1217 01:19:10.792926 1212887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005: {Name:mkf0bae5836e8aa6a0bf5bdb70302c8b20e44bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:19:10.793070 1212887 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:19:10.793251 1212887 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:19:10.793449 1212887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:19:10.793486 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:19:10.793525 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:19:10.793569 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:19:10.793603 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:19:10.793641 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:19:10.793684 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:19:10.793718 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:19:10.793747 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:19:10.793848 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:19:10.793907 1212887 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:19:10.793934 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:19:10.793996 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:19:10.794051 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:19:10.794120 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:19:10.794242 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:19:10.794307 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:19:10.794370 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:19:10.794406 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:19:10.794512 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:19:10.812911 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:19:10.908700 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:19:10.912994 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:19:10.921671 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:19:10.925673 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:19:10.935058 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:19:10.939370 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:19:10.948039 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:19:10.951913 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:19:10.961358 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:19:10.965410 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:19:10.974230 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:19:10.978686 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:19:10.989422 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:19:11.014844 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:19:11.038618 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:19:11.057915 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:19:11.080320 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1217 01:19:11.104192 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:19:11.127370 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:19:11.147552 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:19:11.171415 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:19:11.191434 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:19:11.210757 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:19:11.233657 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:19:11.248939 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:19:11.266776 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:19:11.286251 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:19:11.307570 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:19:11.322688 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:19:11.344469 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:19:11.361059 1212887 ssh_runner.go:195] Run: openssl version
	I1217 01:19:11.369334 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:19:11.381193 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:19:11.390480 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:19:11.395072 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:19:11.395136 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:19:11.440884 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:19:11.448580 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:19:11.456312 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:19:11.464516 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:19:11.468412 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:19:11.468568 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:19:11.510254 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:19:11.518207 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:19:11.526985 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:19:11.535383 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:19:11.539615 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:19:11.539708 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:19:11.584802 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:19:11.593469 1212887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:19:11.597684 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:19:11.640772 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:19:11.692200 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:19:11.738648 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:19:11.792432 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:19:11.839984 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:19:11.892518 1212887 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:19:11.892650 1212887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:19:11.892736 1212887 kube-vip.go:115] generating kube-vip config ...
	I1217 01:19:11.892799 1212887 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:19:11.905420 1212887 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:19:11.905590 1212887 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:19:11.905677 1212887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:19:11.914916 1212887 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:19:11.915005 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:19:11.925064 1212887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:19:11.938790 1212887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:19:11.955984 1212887 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:19:11.974001 1212887 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:19:11.979090 1212887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:19:11.990352 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:19:12.127606 1212887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:19:12.141579 1212887 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:19:12.141795 1212887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:19:12.142161 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:19:12.145896 1212887 out.go:179] * Enabled addons: 
	I1217 01:19:12.145982 1212887 out.go:179] * Verifying Kubernetes components...
	I1217 01:19:12.149008 1212887 addons.go:530] duration metric: took 7.212432ms for enable addons: enabled=[]
	I1217 01:19:12.149136 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:19:12.355157 1212887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:19:12.373735 1212887 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:19:12.373844 1212887 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:19:12.374285 1212887 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:19:12.374307 1212887 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:19:12.374314 1212887 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:19:12.374319 1212887 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:19:12.374331 1212887 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:19:12.374579 1212887 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:19:12.374919 1212887 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	W1217 01:19:14.383901 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
	W1217 01:19:16.878864 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
	W1217 01:19:18.881531 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
	W1217 01:19:21.380522 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
	[... the same node_ready.go:57 warning repeats roughly every 2.5 seconds from 01:19:21 through 01:25:09 ...]
	W1217 01:25:09.878537 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
	W1217 01:25:12.375733 1212887 node_ready.go:55] error getting node "ha-202151-m02" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1217 01:25:12.375767 1212887 node_ready.go:38] duration metric: took 6m0.001169961s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:25:12.379058 1212887 out.go:203] 
	W1217 01:25:12.381988 1212887 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 01:25:12.382015 1212887 out.go:285] * 
	W1217 01:25:12.390852 1212887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:25:12.393937 1212887 out.go:203] 

                                                
                                                
** /stderr **
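
The wait that fails above is a plain poll of the node's Ready condition against the API server, retried every couple of seconds until the "wait 6m0s for node" budget runs out. A minimal sketch of an equivalent check with client-go, assuming a kubeconfig at the default location and the node name from this run (both would need adjusting in another environment):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at ~/.kube/config; the CI run points at the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same shape as the failed wait: 6-minute deadline, retry every ~2.5s.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-202151-m02", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get node failed (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("Ready condition: %s (reason %q)\n", c.Status, c.Reason)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting:", ctx.Err()) // "context deadline exceeded", as in the log
			return
		case <-time.After(2500 * time.Millisecond):
		}
	}
}

Run against the profile's kubeconfig while the node is stuck, a loop like this shows whether the kubelet ever posts a Ready condition at all or whether the API server simply never hears from it.
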
ha_test.go:424: I1217 01:17:31.773448 1212887 out.go:360] Setting OutFile to fd 1 ...
I1217 01:17:31.774907 1212887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:17:31.774955 1212887 out.go:374] Setting ErrFile to fd 2...
I1217 01:17:31.774981 1212887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:17:31.775275 1212887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:17:31.775633 1212887 mustload.go:66] Loading cluster: ha-202151
I1217 01:17:31.776111 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 01:17:31.776779 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
W1217 01:17:31.799302 1212887 host.go:58] "ha-202151-m02" host status: Stopped
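
Throughout this log, cli_runner.go shells out to `docker container inspect` with Go templates to read the container's state and its forwarded SSH port. A rough equivalent of those two queries, assuming the docker CLI is on PATH and using the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker container inspect -f <tmpl> <name>` and returns the trimmed output.
func inspect(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "ha-202151-m02"

	// Raw docker state ("exited" is what minikube reports above as host status "Stopped").
	state, err := inspect(name, "{{.State.Status}}")
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state)

	// Host port forwarded to the container's SSH port; only meaningful once the
	// container is running again (33933 in this run).
	port, err := inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", port)
}

The state query is what drives the "host status: Stopped" decision above; the port query feeds the ssh client that appears once the container has been restarted.
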
I1217 01:17:31.802503 1212887 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
I1217 01:17:31.805560 1212887 cache.go:134] Beginning downloading kic base image for docker with crio
I1217 01:17:31.808528 1212887 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
I1217 01:17:31.811362 1212887 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 01:17:31.811412 1212887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
I1217 01:17:31.811434 1212887 cache.go:65] Caching tarball of preloaded images
I1217 01:17:31.811436 1212887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
I1217 01:17:31.811537 1212887 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
I1217 01:17:31.811552 1212887 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1217 01:17:31.811700 1212887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
I1217 01:17:31.831628 1212887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
I1217 01:17:31.831650 1212887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
I1217 01:17:31.831666 1212887 cache.go:243] Successfully downloaded all kic artifacts
I1217 01:17:31.831689 1212887 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 01:17:31.831755 1212887 start.go:364] duration metric: took 37.743µs to acquireMachinesLock for "ha-202151-m02"
I1217 01:17:31.831780 1212887 start.go:96] Skipping create...Using existing machine configuration
I1217 01:17:31.831791 1212887 fix.go:54] fixHost starting: m02
I1217 01:17:31.832053 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
I1217 01:17:31.849530 1212887 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
W1217 01:17:31.849566 1212887 fix.go:138] unexpected machine state, will restart: <nil>
I1217 01:17:31.852792 1212887 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
I1217 01:17:31.852880 1212887 cli_runner.go:164] Run: docker start ha-202151-m02
I1217 01:17:32.152730 1212887 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
I1217 01:17:32.177802 1212887 kic.go:430] container "ha-202151-m02" state is running.
I1217 01:17:32.178235 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
I1217 01:17:32.211015 1212887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
I1217 01:17:32.211266 1212887 machine.go:94] provisionDockerMachine start ...
I1217 01:17:32.211328 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:32.233771 1212887 main.go:143] libmachine: Using SSH client type: native
I1217 01:17:32.234125 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
I1217 01:17:32.234140 1212887 main.go:143] libmachine: About to run SSH command:
hostname
I1217 01:17:32.234829 1212887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1217 01:17:35.428529 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02

                                                
                                                
I1217 01:17:35.428559 1212887 ubuntu.go:182] provisioning hostname "ha-202151-m02"
I1217 01:17:35.428653 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:35.461510 1212887 main.go:143] libmachine: Using SSH client type: native
I1217 01:17:35.461830 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
I1217 01:17:35.461848 1212887 main.go:143] libmachine: About to run SSH command:
sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
I1217 01:17:35.667563 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02

                                                
                                                
I1217 01:17:35.667695 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:35.698066 1212887 main.go:143] libmachine: Using SSH client type: native
I1217 01:17:35.698389 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
I1217 01:17:35.698411 1212887 main.go:143] libmachine: About to run SSH command:

                                                
                                                
		if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1217 01:17:35.900605 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
I1217 01:17:35.900698 1212887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
I1217 01:17:35.900752 1212887 ubuntu.go:190] setting up certificates
I1217 01:17:35.900791 1212887 provision.go:84] configureAuth start
I1217 01:17:35.900865 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
I1217 01:17:35.928026 1212887 provision.go:143] copyHostCerts
I1217 01:17:35.928069 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
I1217 01:17:35.928112 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
I1217 01:17:35.928124 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
I1217 01:17:35.928207 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
I1217 01:17:35.928297 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
I1217 01:17:35.928313 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
I1217 01:17:35.928317 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
I1217 01:17:35.928345 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
I1217 01:17:35.928391 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
I1217 01:17:35.928409 1212887 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
I1217 01:17:35.928495 1212887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
I1217 01:17:35.928536 1212887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
I1217 01:17:35.928599 1212887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
I1217 01:17:36.007980 1212887 provision.go:177] copyRemoteCerts
I1217 01:17:36.008124 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 01:17:36.008197 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:36.028400 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
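
The ssh client logged here is built from three machine-specific pieces: the forwarded host port (33933), the machine's id_rsa, and the docker user. A minimal sketch of an equivalent session with golang.org/x/crypto/ssh; the key path below assumes a default MINIKUBE_HOME, whereas this CI run keeps it under /home/jenkins/minikube-integration/ instead:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: default MINIKUBE_HOME layout; adjust the path for a custom one.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-202151-m02/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container, not for production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33933", cfg) // forwarded port from the log above
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
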
I1217 01:17:36.137875 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1217 01:17:36.137964 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 01:17:36.205468 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
I1217 01:17:36.205562 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1217 01:17:36.245746 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1217 01:17:36.245840 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1217 01:17:36.273856 1212887 provision.go:87] duration metric: took 373.037787ms to configureAuth
I1217 01:17:36.273887 1212887 ubuntu.go:206] setting minikube options for container-runtime
I1217 01:17:36.274201 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 01:17:36.274373 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:36.295248 1212887 main.go:143] libmachine: Using SSH client type: native
I1217 01:17:36.295601 1212887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33933 <nil> <nil>}
I1217 01:17:36.295644 1212887 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1217 01:17:37.725299 1212887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

                                                
                                                
I1217 01:17:37.725320 1212887 machine.go:97] duration metric: took 5.514045416s to provisionDockerMachine
I1217 01:17:37.725332 1212887 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
I1217 01:17:37.725342 1212887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 01:17:37.725428 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 01:17:37.725475 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:37.745182 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
I1217 01:17:37.856677 1212887 ssh_runner.go:195] Run: cat /etc/os-release
I1217 01:17:37.860336 1212887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1217 01:17:37.860405 1212887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1217 01:17:37.860488 1212887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
I1217 01:17:37.860565 1212887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
I1217 01:17:37.860654 1212887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
I1217 01:17:37.860665 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
I1217 01:17:37.860763 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1217 01:17:37.869015 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
I1217 01:17:37.893523 1212887 start.go:296] duration metric: took 168.176295ms for postStartSetup
I1217 01:17:37.893646 1212887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1217 01:17:37.893702 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:37.911121 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
I1217 01:17:38.011002 1212887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1217 01:17:38.016919 1212887 fix.go:56] duration metric: took 6.18512497s for fixHost
I1217 01:17:38.016955 1212887 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.185186482s
I1217 01:17:38.017084 1212887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
I1217 01:17:38.037169 1212887 ssh_runner.go:195] Run: systemctl --version
I1217 01:17:38.037225 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:38.037498 1212887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 01:17:38.037560 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
I1217 01:17:38.058405 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
I1217 01:17:38.065398 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
I1217 01:17:38.162649 1212887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1217 01:17:38.349718 1212887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 01:17:38.354869 1212887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 01:17:38.354970 1212887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 01:17:38.370751 1212887 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1217 01:17:38.370793 1212887 start.go:496] detecting cgroup driver to use...
I1217 01:17:38.370841 1212887 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1217 01:17:38.370925 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1217 01:17:38.400471 1212887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 01:17:38.439074 1212887 docker.go:218] disabling cri-docker service (if available) ...
I1217 01:17:38.439181 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 01:17:38.466695 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 01:17:38.485622 1212887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 01:17:38.735059 1212887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 01:17:38.990913 1212887 docker.go:234] disabling docker service ...
I1217 01:17:38.991008 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 01:17:39.009403 1212887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 01:17:39.028082 1212887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 01:17:39.269841 1212887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 01:17:39.519802 1212887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 01:17:39.538453 1212887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1217 01:17:39.557763 1212887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1217 01:17:39.557864 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.575378 1212887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1217 01:17:39.575480 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.593802 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.613034 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.623979 1212887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 01:17:39.633291 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.642960 1212887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.652257 1212887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 01:17:39.661456 1212887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 01:17:39.672228 1212887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 01:17:39.681881 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 01:17:39.930603 1212887 ssh_runner.go:195] Run: sudo systemctl restart crio
I1217 01:19:10.214269 1212887 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.283629403s)
I1217 01:19:10.214296 1212887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1217 01:19:10.214350 1212887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1217 01:19:10.218737 1212887 start.go:564] Will wait 60s for crictl version
I1217 01:19:10.218819 1212887 ssh_runner.go:195] Run: which crictl
I1217 01:19:10.224256 1212887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1217 01:19:10.257852 1212887 start.go:580] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.34.3
RuntimeApiVersion:  v1
I1217 01:19:10.257940 1212887 ssh_runner.go:195] Run: crio --version
I1217 01:19:10.292833 1212887 ssh_runner.go:195] Run: crio --version
I1217 01:19:10.338959 1212887 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
I1217 01:19:10.341947 1212887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 01:19:10.406986 1212887 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-17 01:19:10.396295032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 01:19:10.407138 1212887 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:19:10.425083 1212887 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I1217 01:19:10.429681 1212887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 01:19:10.439896 1212887 mustload.go:66] Loading cluster: ha-202151
I1217 01:19:10.440155 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 01:19:10.440475 1212887 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
I1217 01:19:10.459363 1212887 host.go:66] Checking if "ha-202151" exists ...
I1217 01:19:10.459947 1212887 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
I1217 01:19:10.459969 1212887 certs.go:195] generating shared ca certs ...
I1217 01:19:10.459984 1212887 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:19:10.460161 1212887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
I1217 01:19:10.460227 1212887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
I1217 01:19:10.460241 1212887 certs.go:257] generating profile certs ...
I1217 01:19:10.460350 1212887 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
I1217 01:19:10.460388 1212887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005
I1217 01:19:10.460412 1212887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
I1217 01:19:10.792582 1212887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 ...
I1217 01:19:10.792655 1212887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005: {Name:mk27b79a4057c5fa5a631faac81eed9dd28fca1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:19:10.792888 1212887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005 ...
I1217 01:19:10.792926 1212887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005: {Name:mkf0bae5836e8aa6a0bf5bdb70302c8b20e44bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 01:19:10.793070 1212887 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.4c617005 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
I1217 01:19:10.793251 1212887 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.4c617005 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
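
The apiserver certificate regenerated here has to carry every control-plane IP plus the kube-vip VIP as SANs, which is why crypto.go:68 lists all seven addresses. A toy sketch of issuing a certificate with that SAN set using only the Go standard library; it is self-signed and uses a made-up subject for brevity, whereas minikube signs the real one with the profile's CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SAN IPs copied from crypto.go:68 above: service IP, localhost, the three
	// control-plane node IPs and the kube-vip VIP.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254"} {
		ips = append(ips, net.ParseIP(s))
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "sketch-apiserver"}, // made-up subject, not minikube's
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}

	// Self-signed for brevity; minikube signs the real apiserver cert with the profile CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
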
I1217 01:19:10.793449 1212887 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
I1217 01:19:10.793486 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1217 01:19:10.793525 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1217 01:19:10.793569 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1217 01:19:10.793603 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1217 01:19:10.793641 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1217 01:19:10.793684 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1217 01:19:10.793718 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1217 01:19:10.793747 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1217 01:19:10.793848 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
W1217 01:19:10.793907 1212887 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
I1217 01:19:10.793934 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
I1217 01:19:10.793996 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
I1217 01:19:10.794051 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
I1217 01:19:10.794120 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
I1217 01:19:10.794242 1212887 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
I1217 01:19:10.794307 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1217 01:19:10.794370 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
I1217 01:19:10.794406 1212887 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
I1217 01:19:10.794512 1212887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
I1217 01:19:10.812911 1212887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
I1217 01:19:10.908700 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I1217 01:19:10.912994 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I1217 01:19:10.921671 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I1217 01:19:10.925673 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I1217 01:19:10.935058 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I1217 01:19:10.939370 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I1217 01:19:10.948039 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I1217 01:19:10.951913 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
I1217 01:19:10.961358 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I1217 01:19:10.965410 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I1217 01:19:10.974230 1212887 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I1217 01:19:10.978686 1212887 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
I1217 01:19:10.989422 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 01:19:11.014844 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 01:19:11.038618 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 01:19:11.057915 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1217 01:19:11.080320 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I1217 01:19:11.104192 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 01:19:11.127370 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 01:19:11.147552 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1217 01:19:11.171415 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 01:19:11.191434 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
I1217 01:19:11.210757 1212887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
I1217 01:19:11.233657 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I1217 01:19:11.248939 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I1217 01:19:11.266776 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I1217 01:19:11.286251 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
I1217 01:19:11.307570 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I1217 01:19:11.322688 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
I1217 01:19:11.344469 1212887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I1217 01:19:11.361059 1212887 ssh_runner.go:195] Run: openssl version
I1217 01:19:11.369334 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 01:19:11.381193 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 01:19:11.390480 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 01:19:11.395072 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
I1217 01:19:11.395136 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 01:19:11.440884 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 01:19:11.448580 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
I1217 01:19:11.456312 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
I1217 01:19:11.464516 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
I1217 01:19:11.468412 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
I1217 01:19:11.468568 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
I1217 01:19:11.510254 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 01:19:11.518207 1212887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
I1217 01:19:11.526985 1212887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
I1217 01:19:11.535383 1212887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
I1217 01:19:11.539615 1212887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
I1217 01:19:11.539708 1212887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
I1217 01:19:11.584802 1212887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 01:19:11.593469 1212887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 01:19:11.597684 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1217 01:19:11.640772 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1217 01:19:11.692200 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1217 01:19:11.738648 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1217 01:19:11.792432 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1217 01:19:11.839984 1212887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
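
The six openssl invocations above are the 24-hour expiry check (-checkend 86400) applied to the remaining control-plane certificates. The same check in Go, assuming it is run on the node (for example via minikube ssh) where these paths exist:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}
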
I1217 01:19:11.892518 1212887 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
I1217 01:19:11.892650 1212887 kubeadm.go:947] kubelet [Unit]
Wants=crio.service

                                                
                                                
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3

[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 01:19:11.892736 1212887 kube-vip.go:115] generating kube-vip config ...
I1217 01:19:11.892799 1212887 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I1217 01:19:11.905420 1212887 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:

stderr:
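Note: the fallback above is triggered because `lsmod | grep ip_vs` exits non-zero when no ip_vs module is loaded. A hedged sketch of the same check done natively by scanning /proc/modules, which is the data lsmod reads; hasKernelModule is an illustrative name, and like lsmod it will not see modules built directly into the kernel:

package sketch

import (
	"bufio"
	"os"
	"strings"
)

// hasKernelModule reports whether any loaded kernel module name starts
// with prefix, by scanning /proc/modules.
func hasKernelModule(prefix string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && strings.HasPrefix(fields[0], prefix) {
			return true, nil
		}
	}
	return false, sc.Err()
}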
I1217 01:19:11.905590 1212887 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.49.254
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v1.0.2
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I1217 01:19:11.905677 1212887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1217 01:19:11.914916 1212887 binaries.go:51] Found k8s binaries, skipping transfer
I1217 01:19:11.915005 1212887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I1217 01:19:11.925064 1212887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1217 01:19:11.938790 1212887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1217 01:19:11.955984 1212887 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
I1217 01:19:11.974001 1212887 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
I1217 01:19:11.979090 1212887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
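Note: the bash one-liner above makes the control-plane.minikube.internal entry idempotent: any existing line for that hostname is filtered out before the fresh VIP mapping is appended. A rough Go equivalent, assuming the hosts file is small enough to rewrite in one shot; ensureHostsEntry is illustrative, not minikube's helper:

package sketch

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps ip to
// hostname, dropping any stale lines whose last field is that hostname.
func ensureHostsEntry(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			continue // stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}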
I1217 01:19:11.990352 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 01:19:12.127606 1212887 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 01:19:12.141579 1212887 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 01:19:12.141795 1212887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1217 01:19:12.142161 1212887 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 01:19:12.145896 1212887 out.go:179] * Enabled addons: 
I1217 01:19:12.145982 1212887 out.go:179] * Verifying Kubernetes components...
I1217 01:19:12.149008 1212887 addons.go:530] duration metric: took 7.212432ms for enable addons: enabled=[]
I1217 01:19:12.149136 1212887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 01:19:12.355157 1212887 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 01:19:12.373735 1212887 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W1217 01:19:12.373844 1212887 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
I1217 01:19:12.374285 1212887 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1217 01:19:12.374307 1212887 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1217 01:19:12.374314 1212887 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1217 01:19:12.374319 1212887 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1217 01:19:12.374331 1212887 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
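Note: the override logged at 01:19:12.373844 swaps the kubeconfig's VIP endpoint (192.168.49.254) for the primary node's direct address before the readiness wait begins. Conceptually it is just a rewrite of rest.Config.Host after a reachability probe; a hedged sketch of that idea, where pickAPIServerHost and its /healthz probe are assumptions and not minikube's actual logic:

package sketch

import (
	"crypto/tls"
	"net/http"
	"time"
)

// pickAPIServerHost returns vipHost if it answers an HTTPS request within a
// short timeout, otherwise it falls back to directHost. A 401/403 response
// still counts as reachable; only a transport-level failure triggers fallback.
func pickAPIServerHost(vipHost, directHost string) string {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	if resp, err := client.Get(vipHost + "/healthz"); err == nil {
		resp.Body.Close()
		return vipHost
	}
	return directHost
}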
I1217 01:19:12.374579 1212887 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
I1217 01:19:12.374919 1212887 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
W1217 01:19:14.383901 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:16.878864 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:18.881531 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:21.380522 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:23.878694 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:26.378088 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:28.379263 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:30.880938 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:33.378988 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:35.878142 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:38.378801 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:40.878072 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:42.883322 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:45.378503 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:47.379595 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:49.878680 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:52.378691 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:54.379397 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:56.878667 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:19:58.880031 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:01.379177 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:03.380616 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:05.383534 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:07.878614 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:10.378226 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:12.378901 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:14.883092 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:17.378458 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:19.379461 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:21.379840 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:23.878253 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:25.879107 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:28.378748 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:30.881079 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:33.380153 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:35.878734 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:38.378907 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:40.878637 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:42.881167 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:45.380620 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:47.880285 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:50.379994 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:52.878276 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:54.881762 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:57.378621 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:20:59.381116 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:01.880065 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:04.378559 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:06.881371 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:09.379457 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:11.878871 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:14.378760 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:16.379002 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:18.882096 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:21.379502 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:23.380250 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:25.878204 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:27.879154 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:30.378813 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:32.878859 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:34.880901 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:37.378895 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:39.380096 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:41.879356 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:44.378646 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:46.883484 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:49.381038 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:51.877680 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:53.878219 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:56.379566 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:21:58.883060 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:01.378292 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:03.378652 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:05.381437 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:07.878907 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:10.378719 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:12.878659 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:14.881742 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:17.384700 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:19.879048 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:22.379295 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:24.878086 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:26.881541 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:28.881841 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:31.378873 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:33.380289 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:35.878743 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:38.378925 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:40.379292 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:42.882568 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:45.380508 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:47.877897 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:49.878463 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:51.878932 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:54.378241 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:56.378867 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:22:58.882258 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:01.378072 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:03.379966 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:05.877880 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:07.878915 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:10.380643 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:12.880251 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:14.880950 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:17.378556 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:19.380892 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:21.878776 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:23.878970 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:26.379365 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:28.879304 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:30.880366 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:33.378897 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:35.380082 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:37.877982 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:39.878038 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:41.879212 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:44.378666 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:46.883456 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:49.378016 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:51.380469 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:53.878407 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:55.878900 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:23:58.378739 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:00.381092 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:02.880175 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:05.380915 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:07.878183 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:09.878668 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:11.880491 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:13.880622 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:16.378757 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:18.882441 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:21.378542 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:23.379475 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:25.878177 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:27.878797 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:30.378015 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:32.378948 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:34.881395 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:37.379492 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:39.380336 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:41.878820 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:44.378968 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:46.880240 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:49.379367 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:51.878214 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:53.878308 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:56.377960 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:24:58.378761 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:00.387978 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:02.882070 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:05.378407 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:07.878175 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:09.878537 1212887 node_ready.go:57] node "ha-202151-m02" has "Ready":"Unknown" status (will retry)
W1217 01:25:12.375733 1212887 node_ready.go:55] error getting node "ha-202151-m02" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
I1217 01:25:12.375767 1212887 node_ready.go:38] duration metric: took 6m0.001169961s for node "ha-202151-m02" to be "Ready" ...
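Note: the six minutes of retries above are a plain poll of the node's Ready condition against a deadline. For reference, a minimal client-go sketch of such a wait, assuming a working clientset; waitNodeReady is an illustrative name and not the helper in node_ready.go:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True or
// the timeout expires, treating transient API errors as "not ready yet".
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling until the deadline
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}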
I1217 01:25:12.379058 1212887 out.go:203] 
W1217 01:25:12.381988 1212887 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
W1217 01:25:12.382015 1212887 out.go:285] * 
W1217 01:25:12.390852 1212887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1217 01:25:12.393937 1212887 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-arm64 -p ha-202151 node start m02 --alsologtostderr -v 5": exit status 80
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5: (1.026940779s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-202151
helpers_test.go:244: (dbg) docker inspect ha-202151:

-- stdout --
	[
	    {
	        "Id": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	        "Created": "2025-12-17T01:12:34.697109094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1198761,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:12:34.774489667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d-json.log",
	        "Name": "/ha-202151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-202151:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-202151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	                "LowerDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-202151",
	                "Source": "/var/lib/docker/volumes/ha-202151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-202151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-202151",
	                "name.minikube.sigs.k8s.io": "ha-202151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69c4024274eef182b877382c87f84b1066504b243199a1f701b7ba3c5988907d",
	            "SandboxKey": "/var/run/docker/netns/69c4024274ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33913"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33914"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33917"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33915"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33916"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-202151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:53:81:c1:ab:98",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e224ccab4890fdef242aee82a08ae93dfe44ddd1860f17db152892136a611dec",
	                    "EndpointID": "c365610c10e55ebc07513f8cc786a1fa4f0a1e269f7b80a930f9d3231c952292",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-202151",
	                        "0d1af93acb20"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-202151 -n ha-202151
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 logs -n 25: (1.574611747s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m03_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m03_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt                                                             │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node start m02 --alsologtostderr -v 5                                                                                      │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:12:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:12:29.573244 1198371 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:12:29.573418 1198371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:12:29.573450 1198371 out.go:374] Setting ErrFile to fd 2...
	I1217 01:12:29.573474 1198371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:12:29.573731 1198371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:12:29.574155 1198371 out.go:368] Setting JSON to false
	I1217 01:12:29.575013 1198371 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24900,"bootTime":1765909050,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:12:29.575112 1198371 start.go:143] virtualization:  
	I1217 01:12:29.581362 1198371 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:12:29.584949 1198371 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:12:29.585035 1198371 notify.go:221] Checking for updates...
	I1217 01:12:29.592047 1198371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:12:29.595293 1198371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:12:29.598483 1198371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:12:29.601600 1198371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:12:29.604621 1198371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:12:29.607920 1198371 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:12:29.637918 1198371 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:12:29.638045 1198371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:12:29.694786 1198371 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:12:29.685378439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:12:29.694890 1198371 docker.go:319] overlay module found
	I1217 01:12:29.700142 1198371 out.go:179] * Using the docker driver based on user configuration
	I1217 01:12:29.703137 1198371 start.go:309] selected driver: docker
	I1217 01:12:29.703168 1198371 start.go:927] validating driver "docker" against <nil>
	I1217 01:12:29.703182 1198371 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:12:29.703965 1198371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:12:29.767689 1198371 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:12:29.758837594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:12:29.767851 1198371 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 01:12:29.768071 1198371 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:12:29.771109 1198371 out.go:179] * Using Docker driver with root privileges
	I1217 01:12:29.774120 1198371 cni.go:84] Creating CNI manager for ""
	I1217 01:12:29.774198 1198371 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1217 01:12:29.774215 1198371 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 01:12:29.774305 1198371 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1217 01:12:29.777448 1198371 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:12:29.780475 1198371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:12:29.783500 1198371 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:12:29.786632 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:12:29.786755 1198371 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:12:29.786766 1198371 cache.go:65] Caching tarball of preloaded images
	I1217 01:12:29.786883 1198371 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:12:29.786894 1198371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:12:29.786938 1198371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:12:29.787249 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:12:29.787280 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json: {Name:mk7a4daa8517cc1e64c33a6aa9e92c2df93e509a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:29.806154 1198371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:12:29.806181 1198371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:12:29.806197 1198371 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:12:29.806230 1198371 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:12:29.806336 1198371 start.go:364] duration metric: took 85.454µs to acquireMachinesLock for "ha-202151"
	I1217 01:12:29.806368 1198371 start.go:93] Provisioning new machine with config: &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:12:29.806438 1198371 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:12:29.809875 1198371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:12:29.810118 1198371 start.go:159] libmachine.API.Create for "ha-202151" (driver="docker")
	I1217 01:12:29.810158 1198371 client.go:173] LocalClient.Create starting
	I1217 01:12:29.810241 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 01:12:29.810278 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:12:29.810301 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:12:29.810362 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 01:12:29.810389 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:12:29.810407 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:12:29.810779 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:12:29.826895 1198371 cli_runner.go:211] docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:12:29.826987 1198371 network_create.go:284] running [docker network inspect ha-202151] to gather additional debugging logs...
	I1217 01:12:29.827014 1198371 cli_runner.go:164] Run: docker network inspect ha-202151
	W1217 01:12:29.843906 1198371 cli_runner.go:211] docker network inspect ha-202151 returned with exit code 1
	I1217 01:12:29.843938 1198371 network_create.go:287] error running [docker network inspect ha-202151]: docker network inspect ha-202151: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-202151 not found
	I1217 01:12:29.843960 1198371 network_create.go:289] output of [docker network inspect ha-202151]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-202151 not found
	
	** /stderr **
	I1217 01:12:29.844062 1198371 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:12:29.861598 1198371 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001913820}
	I1217 01:12:29.861651 1198371 network_create.go:124] attempt to create docker network ha-202151 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 01:12:29.861710 1198371 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-202151 ha-202151
	I1217 01:12:29.923119 1198371 network_create.go:108] docker network ha-202151 192.168.49.0/24 created
	I1217 01:12:29.923156 1198371 kic.go:121] calculated static IP "192.168.49.2" for the "ha-202151" container
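For reference, the addressing picked above follows directly from the chosen subnet: 192.168.49.0/24 yields 192.168.49.1 as the network gateway and 192.168.49.2 as the first (and here only) node address. A tiny illustration with Go's net/netip, not minikube's own allocation code:

// Illustration only: derive the ".1 gateway / .2 node" pair the log reports
// for the 192.168.49.0/24 network. Not minikube's actual allocation logic.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.49.0/24")
	gateway := prefix.Addr().Next() // 192.168.49.1
	node := gateway.Next()          // 192.168.49.2, the static IP for ha-202151
	fmt.Println(prefix, gateway, node)
}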
	I1217 01:12:29.923250 1198371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:12:29.938809 1198371 cli_runner.go:164] Run: docker volume create ha-202151 --label name.minikube.sigs.k8s.io=ha-202151 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:12:29.957425 1198371 oci.go:103] Successfully created a docker volume ha-202151
	I1217 01:12:29.957514 1198371 cli_runner.go:164] Run: docker run --rm --name ha-202151-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151 --entrypoint /usr/bin/test -v ha-202151:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:12:30.553499 1198371 oci.go:107] Successfully prepared a docker volume ha-202151
	I1217 01:12:30.553569 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:12:30.553578 1198371 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:12:30.553647 1198371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:12:34.622699 1198371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.069009531s)
	I1217 01:12:34.622730 1198371 kic.go:203] duration metric: took 4.069148793s to extract preloaded images to volume ...
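The extraction step above boils down to a single docker invocation: run tar from the kicbase image with the lz4 preload bind-mounted read-only and the cluster's named volume mounted at /extractDir. A hand-rolled sketch of that call via os/exec, with the path and image digest copied from the log and error handling kept minimal:

// Sketch of the preload-extraction command shown in the log: untar the lz4
// preload into the "ha-202151" docker volume using tar from the kicbase image.
package main

import (
	"log"
	"os/exec"
)

func main() {
	preload := "/home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", "ha-202151:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}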
	W1217 01:12:34.622869 1198371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:12:34.622977 1198371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:12:34.682032 1198371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-202151 --name ha-202151 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-202151 --network ha-202151 --ip 192.168.49.2 --volume ha-202151:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:12:35.001499 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Running}}
	I1217 01:12:35.031324 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:12:35.056044 1198371 cli_runner.go:164] Run: docker exec ha-202151 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:12:35.107861 1198371 oci.go:144] the created container "ha-202151" has a running status.
	I1217 01:12:35.107894 1198371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa...
	I1217 01:12:35.303052 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 01:12:35.303163 1198371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:12:35.324067 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:12:35.341962 1198371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:12:35.341983 1198371 kic_runner.go:114] Args: [docker exec --privileged ha-202151 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:12:35.390345 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:12:35.420570 1198371 machine.go:94] provisionDockerMachine start ...
	I1217 01:12:35.420673 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:35.448039 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:12:35.448375 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33913 <nil> <nil>}
	I1217 01:12:35.448384 1198371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:12:35.449269 1198371 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58370->127.0.0.1:33913: read: connection reset by peer
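This handshake failure is transient: the container has only just started and sshd inside it is not accepting connections yet, so the provisioner keeps retrying until the dial succeeds (about three seconds later, as the next line shows). A simplified sketch of such a wait loop, using a plain TCP dial and made-up timeout values rather than minikube's actual retry logic:

// Wait for an SSH endpoint to accept TCP connections, retrying until a
// deadline. Sketch only; the timeouts here are illustrative, not minikube's.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33913", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}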
	I1217 01:12:38.579943 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:12:38.579969 1198371 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:12:38.580032 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:38.598054 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:12:38.598378 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33913 <nil> <nil>}
	I1217 01:12:38.598395 1198371 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:12:38.737801 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:12:38.737902 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:38.756138 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:12:38.756492 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33913 <nil> <nil>}
	I1217 01:12:38.756515 1198371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:12:38.888747 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:12:38.888817 1198371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:12:38.888855 1198371 ubuntu.go:190] setting up certificates
	I1217 01:12:38.888898 1198371 provision.go:84] configureAuth start
	I1217 01:12:38.889023 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:12:38.906947 1198371 provision.go:143] copyHostCerts
	I1217 01:12:38.906996 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:12:38.907031 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:12:38.907039 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:12:38.907116 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:12:38.907234 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:12:38.907253 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:12:38.907258 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:12:38.907303 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:12:38.907351 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:12:38.907369 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:12:38.907373 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:12:38.907398 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:12:38.907441 1198371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
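The server certificate generated here carries the DNS and IP SANs listed at the end of that line (127.0.0.1, 192.168.49.2, ha-202151, localhost, minikube). A compact sketch of producing such a certificate with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs it with the ca.pem/ca-key.pem named above, and error handling is omitted to keep it short:

// Generate a server certificate carrying the SAN list from the log.
// Self-signed sketch only; the real cert is signed by the minikube CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-202151"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-202151", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}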
	I1217 01:12:38.990427 1198371 provision.go:177] copyRemoteCerts
	I1217 01:12:38.990505 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:12:38.990551 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.010279 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:12:39.104097 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:12:39.104157 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:12:39.121228 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:12:39.121290 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:12:39.138673 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:12:39.138744 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:12:39.155924 1198371 provision.go:87] duration metric: took 266.993446ms to configureAuth
	I1217 01:12:39.155951 1198371 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:12:39.156140 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:12:39.156236 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.173565 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:12:39.173893 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33913 <nil> <nil>}
	I1217 01:12:39.173914 1198371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:12:39.468892 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:12:39.468917 1198371 machine.go:97] duration metric: took 4.048326343s to provisionDockerMachine
	I1217 01:12:39.468929 1198371 client.go:176] duration metric: took 9.658759341s to LocalClient.Create
	I1217 01:12:39.468985 1198371 start.go:167] duration metric: took 9.658826727s to libmachine.API.Create "ha-202151"
	I1217 01:12:39.468994 1198371 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:12:39.469005 1198371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:12:39.469098 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:12:39.469147 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.486928 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:12:39.580562 1198371 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:12:39.583957 1198371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:12:39.583989 1198371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:12:39.584001 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:12:39.584093 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:12:39.584179 1198371 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:12:39.584191 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:12:39.584314 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:12:39.591537 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:12:39.608818 1198371 start.go:296] duration metric: took 139.808804ms for postStartSetup
	I1217 01:12:39.609193 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:12:39.626270 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:12:39.626563 1198371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:12:39.626611 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.647810 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:12:39.741561 1198371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:12:39.746520 1198371 start.go:128] duration metric: took 9.940067846s to createHost
	I1217 01:12:39.746547 1198371 start.go:83] releasing machines lock for "ha-202151", held for 9.940195907s
	I1217 01:12:39.746624 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:12:39.763351 1198371 ssh_runner.go:195] Run: cat /version.json
	I1217 01:12:39.763409 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.763684 1198371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:12:39.763738 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:12:39.786982 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:12:39.794065 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:12:39.969968 1198371 ssh_runner.go:195] Run: systemctl --version
	I1217 01:12:39.976354 1198371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:12:40.039132 1198371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:12:40.048578 1198371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:12:40.048660 1198371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:12:40.087411 1198371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:12:40.087434 1198371 start.go:496] detecting cgroup driver to use...
	I1217 01:12:40.087480 1198371 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:12:40.087535 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:12:40.106764 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:12:40.121121 1198371 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:12:40.121276 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:12:40.141103 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:12:40.160671 1198371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:12:40.293863 1198371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:12:40.419530 1198371 docker.go:234] disabling docker service ...
	I1217 01:12:40.419621 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:12:40.443753 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:12:40.457516 1198371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:12:40.579129 1198371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:12:40.703344 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:12:40.716919 1198371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:12:40.731710 1198371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:12:40.731831 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.740913 1198371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:12:40.740987 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.750449 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.759940 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.768932 1198371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:12:40.778302 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.787492 1198371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.801204 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:12:40.809955 1198371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:12:40.817537 1198371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:12:40.825096 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:12:40.942728 1198371 ssh_runner.go:195] Run: sudo systemctl restart crio
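Taken together, the sed one-liners above make a handful of line-level rewrites in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted: pin the pause image, switch the cgroup manager to cgroupfs, and add net.ipv4.ip_unprivileged_port_start=0 to the default sysctls. A Go sketch of the two central rewrites applied to an in-memory sample; the starting values in the sample string are hypothetical, not the file's real contents:

// Illustrative rewrite of pause_image and cgroup_manager, mirroring the sed
// commands in the log; the input string is a made-up stand-in for 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}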
	I1217 01:12:41.095006 1198371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:12:41.095131 1198371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:12:41.099067 1198371 start.go:564] Will wait 60s for crictl version
	I1217 01:12:41.099184 1198371 ssh_runner.go:195] Run: which crictl
	I1217 01:12:41.102755 1198371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:12:41.126791 1198371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:12:41.126926 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:12:41.155054 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:12:41.188291 1198371 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:12:41.191152 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:12:41.207091 1198371 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:12:41.210870 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:12:41.220672 1198371 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:12:41.220786 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:12:41.220846 1198371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:12:41.256285 1198371 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:12:41.256309 1198371 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:12:41.256364 1198371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:12:41.281034 1198371 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:12:41.281058 1198371 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:12:41.281067 1198371 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:12:41.281156 1198371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:12:41.281249 1198371 ssh_runner.go:195] Run: crio config
	I1217 01:12:41.345512 1198371 cni.go:84] Creating CNI manager for ""
	I1217 01:12:41.345584 1198371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1217 01:12:41.345618 1198371 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:12:41.345671 1198371 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:12:41.345886 1198371 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
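This rendered config is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new. One useful consistency point is that the KubeletConfiguration's cgroupDriver and containerRuntimeEndpoint must agree with the cri-o settings configured a few steps earlier; a small check of those two fields using gopkg.in/yaml.v3 (the library choice is an assumption, any YAML parser would do):

// Parse the kubelet section of the generated config and print the two fields
// that must line up with the cri-o configuration (cgroupfs, crio.sock).
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var cfg struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
}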
	
	I1217 01:12:41.345928 1198371 kube-vip.go:115] generating kube-vip config ...
	I1217 01:12:41.346016 1198371 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:12:41.358369 1198371 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:12:41.358554 1198371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:12:41.358664 1198371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:12:41.366690 1198371 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:12:41.366767 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:12:41.374632 1198371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:12:41.388816 1198371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:12:41.402734 1198371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:12:41.415684 1198371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1217 01:12:41.428587 1198371 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:12:41.432348 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:12:41.442682 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:12:41.564879 1198371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:12:41.583776 1198371 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:12:41.583847 1198371 certs.go:195] generating shared ca certs ...
	I1217 01:12:41.583879 1198371 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:41.584073 1198371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:12:41.584156 1198371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:12:41.584182 1198371 certs.go:257] generating profile certs ...
	I1217 01:12:41.584263 1198371 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:12:41.584293 1198371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt with IP's: []
	I1217 01:12:41.852940 1198371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt ...
	I1217 01:12:41.852973 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt: {Name:mk299e2326f7f8f0d16b9e6a287dee04930c8b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:41.853222 1198371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key ...
	I1217 01:12:41.853244 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key: {Name:mkb8c5238f3ec740a18b936c0aabec3d53256aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:41.853334 1198371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.e424bc10
	I1217 01:12:41.853359 1198371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.e424bc10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1217 01:12:42.007545 1198371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.e424bc10 ...
	I1217 01:12:42.007587 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.e424bc10: {Name:mkf50b42e24f0e49d5901d627b5fcb643790e258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:42.007824 1198371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.e424bc10 ...
	I1217 01:12:42.007837 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.e424bc10: {Name:mk400d5b416fe6120d4e001823e36ea4178cf7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:42.007933 1198371 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.e424bc10 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:12:42.008034 1198371 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.e424bc10 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:12:42.008100 1198371 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:12:42.008137 1198371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt with IP's: []
	I1217 01:12:42.142602 1198371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt ...
	I1217 01:12:42.142638 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt: {Name:mkc7a306a038454bc2a62c53947cc7b574ee98f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:42.142821 1198371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key ...
	I1217 01:12:42.142833 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key: {Name:mk28743087dc05cf6ef6dc7219ab2888f0a58a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:12:42.142914 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:12:42.142930 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:12:42.142950 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:12:42.142963 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:12:42.142973 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:12:42.142987 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:12:42.143003 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:12:42.143014 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:12:42.143070 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:12:42.143109 1198371 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:12:42.143118 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:12:42.143148 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:12:42.143179 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:12:42.143204 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:12:42.143254 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:12:42.143486 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:12:42.143521 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:12:42.143534 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:12:42.144187 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:12:42.167093 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:12:42.191150 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:12:42.217280 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:12:42.241722 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:12:42.263781 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:12:42.283948 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:12:42.304224 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:12:42.325564 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:12:42.349288 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:12:42.367754 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:12:42.385893 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:12:42.399050 1198371 ssh_runner.go:195] Run: openssl version
	I1217 01:12:42.405359 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:12:42.412611 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:12:42.420142 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:12:42.424235 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:12:42.424310 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:12:42.465565 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:12:42.473218 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1136597.pem /etc/ssl/certs/51391683.0
	I1217 01:12:42.480870 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:12:42.488496 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:12:42.496143 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:12:42.500208 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:12:42.500378 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:12:42.544669 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:12:42.552299 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11365972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:12:42.559883 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:12:42.567542 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:12:42.575331 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:12:42.579083 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:12:42.579146 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:12:42.620280 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:12:42.628019 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
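The openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject hash (for example, minikubeCA.pem ends up as /etc/ssl/certs/b5213941.0). A sketch of that pairing in Go via os/exec; minikube runs the equivalent shell commands over SSH instead of locally:

// Compute the OpenSSL subject hash of a CA file and symlink it into
// /etc/ssl/certs/<hash>.0, mirroring the `openssl x509 -hash` + `ln -fs` pair.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // mirror the -f in `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}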
	I1217 01:12:42.635746 1198371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:12:42.639638 1198371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:12:42.639703 1198371 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:12:42.639781 1198371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:12:42.639844 1198371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:12:42.667002 1198371 cri.go:89] found id: ""
	I1217 01:12:42.667073 1198371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:12:42.675143 1198371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:12:42.682816 1198371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:12:42.682934 1198371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:12:42.690808 1198371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:12:42.690829 1198371 kubeadm.go:158] found existing configuration files:
	
	I1217 01:12:42.690903 1198371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:12:42.698814 1198371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:12:42.698947 1198371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:12:42.706471 1198371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:12:42.714409 1198371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:12:42.714524 1198371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:12:42.722196 1198371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:12:42.730144 1198371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:12:42.730235 1198371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:12:42.737735 1198371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:12:42.745653 1198371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:12:42.745779 1198371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:12:42.753505 1198371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:12:42.792192 1198371 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 01:12:42.792513 1198371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:12:42.816582 1198371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:12:42.816676 1198371 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 01:12:42.816718 1198371 kubeadm.go:319] OS: Linux
	I1217 01:12:42.816772 1198371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:12:42.816826 1198371 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:12:42.816878 1198371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:12:42.816931 1198371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:12:42.816983 1198371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:12:42.817035 1198371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:12:42.817084 1198371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:12:42.817151 1198371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:12:42.817202 1198371 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:12:42.886380 1198371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:12:42.886500 1198371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:12:42.886592 1198371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:12:42.896841 1198371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:12:42.903250 1198371 out.go:252]   - Generating certificates and keys ...
	I1217 01:12:42.903361 1198371 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:12:42.903443 1198371 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:12:43.079604 1198371 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:12:43.405847 1198371 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:12:44.262299 1198371 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:12:45.385055 1198371 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:12:46.050129 1198371 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:12:46.050484 1198371 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-202151 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 01:12:46.651542 1198371 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:12:46.651894 1198371 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-202151 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 01:12:46.709767 1198371 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:12:47.655703 1198371 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:12:47.753706 1198371 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:12:47.756803 1198371 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:12:48.265701 1198371 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:12:48.372670 1198371 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:12:48.454591 1198371 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:12:48.996878 1198371 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:12:49.573012 1198371 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:12:49.573822 1198371 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:12:49.576823 1198371 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:12:49.579913 1198371 out.go:252]   - Booting up control plane ...
	I1217 01:12:49.580025 1198371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:12:49.580108 1198371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:12:49.580189 1198371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:12:49.595069 1198371 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:12:49.595178 1198371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:12:49.609374 1198371 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:12:49.610032 1198371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:12:49.610244 1198371 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:12:49.748291 1198371 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:12:49.748412 1198371 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:12:50.752735 1198371 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000881832s
	I1217 01:12:50.752851 1198371 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 01:12:50.752938 1198371 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 01:12:50.753031 1198371 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 01:12:50.753114 1198371 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 01:12:57.721785 1198371 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.969384883s
	I1217 01:12:57.851811 1198371 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.099198327s
	I1217 01:12:58.254559 1198371 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502140658s
	I1217 01:12:58.286856 1198371 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 01:12:58.303684 1198371 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 01:12:58.316850 1198371 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 01:12:58.317098 1198371 kubeadm.go:319] [mark-control-plane] Marking the node ha-202151 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 01:12:58.329642 1198371 kubeadm.go:319] [bootstrap-token] Using token: zr2kqa.1om5f126nx2bwhev
	I1217 01:12:58.332636 1198371 out.go:252]   - Configuring RBAC rules ...
	I1217 01:12:58.332757 1198371 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 01:12:58.337156 1198371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 01:12:58.347837 1198371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 01:12:58.352047 1198371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 01:12:58.356071 1198371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 01:12:58.360462 1198371 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 01:12:58.661789 1198371 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 01:12:59.113482 1198371 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 01:12:59.661540 1198371 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 01:12:59.662710 1198371 kubeadm.go:319] 
	I1217 01:12:59.662813 1198371 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 01:12:59.662825 1198371 kubeadm.go:319] 
	I1217 01:12:59.662931 1198371 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 01:12:59.662944 1198371 kubeadm.go:319] 
	I1217 01:12:59.662975 1198371 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 01:12:59.663035 1198371 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 01:12:59.663088 1198371 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 01:12:59.663092 1198371 kubeadm.go:319] 
	I1217 01:12:59.663146 1198371 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 01:12:59.663150 1198371 kubeadm.go:319] 
	I1217 01:12:59.663203 1198371 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 01:12:59.663207 1198371 kubeadm.go:319] 
	I1217 01:12:59.663259 1198371 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 01:12:59.663333 1198371 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 01:12:59.663401 1198371 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 01:12:59.663405 1198371 kubeadm.go:319] 
	I1217 01:12:59.663489 1198371 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 01:12:59.663565 1198371 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 01:12:59.663569 1198371 kubeadm.go:319] 
	I1217 01:12:59.663653 1198371 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zr2kqa.1om5f126nx2bwhev \
	I1217 01:12:59.663755 1198371 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 \
	I1217 01:12:59.663775 1198371 kubeadm.go:319] 	--control-plane 
	I1217 01:12:59.663779 1198371 kubeadm.go:319] 
	I1217 01:12:59.663863 1198371 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 01:12:59.663867 1198371 kubeadm.go:319] 
	I1217 01:12:59.663949 1198371 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zr2kqa.1om5f126nx2bwhev \
	I1217 01:12:59.664052 1198371 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 
	I1217 01:12:59.667646 1198371 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1217 01:12:59.667863 1198371 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 01:12:59.667970 1198371 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:12:59.667991 1198371 cni.go:84] Creating CNI manager for ""
	I1217 01:12:59.668003 1198371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1217 01:12:59.670991 1198371 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 01:12:59.673937 1198371 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 01:12:59.678118 1198371 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1217 01:12:59.678138 1198371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1217 01:12:59.691634 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 01:12:59.989689 1198371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 01:12:59.989858 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:12:59.989959 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-202151 minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=ha-202151 minikube.k8s.io/primary=true
	I1217 01:13:00.293193 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:00.293314 1198371 ops.go:34] apiserver oom_adj: -16
	I1217 01:13:00.793947 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:01.293726 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:01.794144 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:02.293400 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:02.793740 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:03.294024 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:03.794220 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:04.294032 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:04.793312 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:13:04.914751 1198371 kubeadm.go:1114] duration metric: took 4.924965163s to wait for elevateKubeSystemPrivileges
	I1217 01:13:04.914787 1198371 kubeadm.go:403] duration metric: took 22.275088804s to StartCluster
	I1217 01:13:04.914806 1198371 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:13:04.914886 1198371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:13:04.915555 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:13:04.915798 1198371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 01:13:04.915797 1198371 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:13:04.915818 1198371 start.go:242] waiting for startup goroutines ...
	I1217 01:13:04.915827 1198371 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:13:04.915886 1198371 addons.go:70] Setting storage-provisioner=true in profile "ha-202151"
	I1217 01:13:04.915899 1198371 addons.go:239] Setting addon storage-provisioner=true in "ha-202151"
	I1217 01:13:04.915919 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:13:04.916056 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:13:04.916090 1198371 addons.go:70] Setting default-storageclass=true in profile "ha-202151"
	I1217 01:13:04.916101 1198371 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-202151"
	I1217 01:13:04.916366 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:13:04.916376 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:13:04.953382 1198371 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:13:04.955116 1198371 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:13:04.955839 1198371 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:13:04.955865 1198371 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:13:04.955871 1198371 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:13:04.955875 1198371 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:13:04.955879 1198371 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:13:04.956214 1198371 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:13:04.956227 1198371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 01:13:04.956288 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:13:04.956510 1198371 addons.go:239] Setting addon default-storageclass=true in "ha-202151"
	I1217 01:13:04.956547 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:13:04.956955 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:13:04.957430 1198371 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:13:04.988985 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:13:05.000040 1198371 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 01:13:05.000065 1198371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 01:13:05.000146 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:13:05.031802 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:13:05.151520 1198371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 01:13:05.166168 1198371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:13:05.306169 1198371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 01:13:05.723232 1198371 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 01:13:06.028571 1198371 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 01:13:06.031302 1198371 addons.go:530] duration metric: took 1.115462219s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 01:13:06.031361 1198371 start.go:247] waiting for cluster config update ...
	I1217 01:13:06.031376 1198371 start.go:256] writing updated cluster config ...
	I1217 01:13:06.034668 1198371 out.go:203] 
	I1217 01:13:06.037728 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:13:06.037815 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:13:06.041100 1198371 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:13:06.044033 1198371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:13:06.047368 1198371 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:13:06.050423 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:13:06.050480 1198371 cache.go:65] Caching tarball of preloaded images
	I1217 01:13:06.050456 1198371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:13:06.050749 1198371 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:13:06.050765 1198371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:13:06.050906 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:13:06.074489 1198371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:13:06.074521 1198371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:13:06.074660 1198371 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:13:06.074692 1198371 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:13:06.075009 1198371 start.go:364] duration metric: took 231.501µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:13:06.075160 1198371 start.go:93] Provisioning new machine with config: &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:13:06.075342 1198371 start.go:125] createHost starting for "m02" (driver="docker")
	I1217 01:13:06.080976 1198371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:13:06.081113 1198371 start.go:159] libmachine.API.Create for "ha-202151" (driver="docker")
	I1217 01:13:06.081143 1198371 client.go:173] LocalClient.Create starting
	I1217 01:13:06.081213 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 01:13:06.081252 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:13:06.081280 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:13:06.081339 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 01:13:06.081361 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:13:06.081374 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:13:06.081634 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:13:06.098826 1198371 network_create.go:77] Found existing network {name:ha-202151 subnet:0x4001bd5ce0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1217 01:13:06.098980 1198371 kic.go:121] calculated static IP "192.168.49.3" for the "ha-202151-m02" container
	I1217 01:13:06.099064 1198371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:13:06.121624 1198371 cli_runner.go:164] Run: docker volume create ha-202151-m02 --label name.minikube.sigs.k8s.io=ha-202151-m02 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:13:06.143556 1198371 oci.go:103] Successfully created a docker volume ha-202151-m02
	I1217 01:13:06.143719 1198371 cli_runner.go:164] Run: docker run --rm --name ha-202151-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151-m02 --entrypoint /usr/bin/test -v ha-202151-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:13:06.745075 1198371 oci.go:107] Successfully prepared a docker volume ha-202151-m02
	I1217 01:13:06.745129 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:13:06.745141 1198371 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:13:06.745232 1198371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:13:10.729853 1198371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.984572682s)
	I1217 01:13:10.729888 1198371 kic.go:203] duration metric: took 3.984744114s to extract preloaded images to volume ...
	W1217 01:13:10.730023 1198371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:13:10.730141 1198371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:13:10.806622 1198371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-202151-m02 --name ha-202151-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-202151-m02 --network ha-202151 --ip 192.168.49.3 --volume ha-202151-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:13:11.129991 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Running}}
	I1217 01:13:11.155621 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:13:11.179529 1198371 cli_runner.go:164] Run: docker exec ha-202151-m02 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:13:11.245125 1198371 oci.go:144] the created container "ha-202151-m02" has a running status.
	I1217 01:13:11.245154 1198371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa...
	I1217 01:13:11.492264 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 01:13:11.492316 1198371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:13:11.522098 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:13:11.548166 1198371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:13:11.548189 1198371 kic_runner.go:114] Args: [docker exec --privileged ha-202151-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:13:11.603759 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:13:11.635409 1198371 machine.go:94] provisionDockerMachine start ...
	I1217 01:13:11.635523 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:11.664114 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:13:11.665838 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1217 01:13:11.665867 1198371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:13:11.666535 1198371 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60294->127.0.0.1:33918: read: connection reset by peer
	I1217 01:13:14.799878 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:13:14.799905 1198371 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:13:14.799980 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:14.821793 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:13:14.822108 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1217 01:13:14.822129 1198371 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:13:14.967131 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:13:14.967207 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:14.984742 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:13:14.985075 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1217 01:13:14.985098 1198371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:13:15.148829 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:13:15.148861 1198371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:13:15.148893 1198371 ubuntu.go:190] setting up certificates
	I1217 01:13:15.148903 1198371 provision.go:84] configureAuth start
	I1217 01:13:15.148964 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:13:15.167288 1198371 provision.go:143] copyHostCerts
	I1217 01:13:15.167339 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:13:15.167373 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:13:15.167384 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:13:15.167468 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:13:15.167614 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:13:15.167641 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:13:15.167659 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:13:15.167699 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:13:15.167762 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:13:15.167788 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:13:15.167794 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:13:15.167840 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:13:15.167895 1198371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:13:15.634746 1198371 provision.go:177] copyRemoteCerts
	I1217 01:13:15.634815 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:13:15.634865 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:15.654050 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:13:15.751948 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:13:15.752008 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:13:15.773982 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:13:15.774068 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:13:15.795880 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:13:15.795982 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:13:15.817184 1198371 provision.go:87] duration metric: took 668.265206ms to configureAuth
	I1217 01:13:15.817212 1198371 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:13:15.817436 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:13:15.817573 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:15.845620 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:13:15.845937 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33918 <nil> <nil>}
	I1217 01:13:15.845958 1198371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:13:16.125101 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:13:16.125122 1198371 machine.go:97] duration metric: took 4.489686771s to provisionDockerMachine
	I1217 01:13:16.125133 1198371 client.go:176] duration metric: took 10.043983181s to LocalClient.Create
	I1217 01:13:16.125146 1198371 start.go:167] duration metric: took 10.044034585s to libmachine.API.Create "ha-202151"
	I1217 01:13:16.125154 1198371 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:13:16.125164 1198371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:13:16.125232 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:13:16.125273 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:16.143199 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:13:16.241022 1198371 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:13:16.245714 1198371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:13:16.245744 1198371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:13:16.245755 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:13:16.245836 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:13:16.245944 1198371 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:13:16.245952 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:13:16.246078 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:13:16.254487 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:13:16.273140 1198371 start.go:296] duration metric: took 147.957711ms for postStartSetup
	I1217 01:13:16.273494 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:13:16.293895 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:13:16.294211 1198371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:13:16.294265 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:16.322715 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:13:16.417426 1198371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:13:16.422963 1198371 start.go:128] duration metric: took 10.347604864s to createHost
	I1217 01:13:16.422987 1198371 start.go:83] releasing machines lock for "ha-202151-m02", held for 10.347964s
	I1217 01:13:16.423056 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:13:16.446993 1198371 out.go:179] * Found network options:
	I1217 01:13:16.449910 1198371 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:13:16.452888 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:13:16.452945 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:13:16.453019 1198371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:13:16.453065 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:16.453098 1198371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:13:16.453153 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:13:16.471636 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:13:16.473407 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:13:16.617147 1198371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:13:16.685565 1198371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:13:16.685649 1198371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:13:16.716612 1198371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:13:16.716648 1198371 start.go:496] detecting cgroup driver to use...
	I1217 01:13:16.716699 1198371 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:13:16.716766 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:13:16.734597 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:13:16.747641 1198371 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:13:16.747750 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:13:16.765411 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:13:16.784859 1198371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:13:16.905038 1198371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:13:17.036945 1198371 docker.go:234] disabling docker service ...
	I1217 01:13:17.037081 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:13:17.059390 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:13:17.073581 1198371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:13:17.196494 1198371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:13:17.318731 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:13:17.331730 1198371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:13:17.345903 1198371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:13:17.345975 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.355007 1198371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:13:17.355154 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.364561 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.373803 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.382591 1198371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:13:17.390824 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.399217 1198371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.413155 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:13:17.421890 1198371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:13:17.429839 1198371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:13:17.437510 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:13:17.563366 1198371 ssh_runner.go:195] Run: sudo systemctl restart crio
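
Taken together, the crictl.yaml write and the sed edits above point CRI-O at the expected pause image, switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, and lower the unprivileged port start to 0 via default_sysctls before the service is restarted. A hypothetical spot-check on the node (not part of this run) would be:

	# Verify the drop-in edited above and that CRI-O came back after the restart.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio
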
	I1217 01:13:17.739471 1198371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:13:17.739552 1198371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:13:17.743806 1198371 start.go:564] Will wait 60s for crictl version
	I1217 01:13:17.743870 1198371 ssh_runner.go:195] Run: which crictl
	I1217 01:13:17.747554 1198371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:13:17.772875 1198371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:13:17.772994 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:13:17.807217 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:13:17.843370 1198371 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:13:17.846242 1198371 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:13:17.849222 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:13:17.865789 1198371 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:13:17.870001 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:13:17.880178 1198371 mustload.go:66] Loading cluster: ha-202151
	I1217 01:13:17.880384 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:13:17.880709 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:13:17.898618 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:13:17.898932 1198371 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:13:17.898948 1198371 certs.go:195] generating shared ca certs ...
	I1217 01:13:17.898962 1198371 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:13:17.899109 1198371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:13:17.899158 1198371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:13:17.899173 1198371 certs.go:257] generating profile certs ...
	I1217 01:13:17.899259 1198371 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:13:17.899303 1198371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:13:17.899320 1198371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.53e15730 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:13:18.278829 1198371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.53e15730 ...
	I1217 01:13:18.278860 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.53e15730: {Name:mk2c70ee57af7c29f1ad579e2c2f26fd96c8bddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:13:18.279071 1198371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730 ...
	I1217 01:13:18.279098 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730: {Name:mkd8288a4446479aa76b7c5e23e4812e664ca2dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:13:18.279211 1198371 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.53e15730 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:13:18.279358 1198371 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:13:18.279528 1198371 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
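
The regenerated apiserver certificate above is signed for the cluster service IP, loopback, both control-plane node IPs (192.168.49.2 and .3) and the HA VIP 192.168.49.254, which is what lets clients reach the API through either node or the VIP. A hypothetical way to confirm the SANs in the generated file:

	# Not part of this run: list the SANs baked into the regenerated apiserver cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
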
	I1217 01:13:18.279550 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:13:18.279567 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:13:18.279579 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:13:18.279592 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:13:18.279604 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:13:18.279622 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:13:18.279638 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:13:18.279648 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:13:18.279704 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:13:18.279739 1198371 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:13:18.279752 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:13:18.279778 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:13:18.279810 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:13:18.279843 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:13:18.279892 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:13:18.279926 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:13:18.279951 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:13:18.279971 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:13:18.280043 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:13:18.297166 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:13:18.388822 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:13:18.392752 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:13:18.400902 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:13:18.404487 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:13:18.412876 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:13:18.416576 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:13:18.425170 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:13:18.428933 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:13:18.438561 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:13:18.442180 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:13:18.450333 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:13:18.453844 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:13:18.462536 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:13:18.481365 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:13:18.500167 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:13:18.520193 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:13:18.538320 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 01:13:18.558246 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:13:18.576438 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:13:18.594744 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:13:18.612599 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:13:18.630280 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:13:18.650015 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:13:18.668070 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:13:18.682160 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:13:18.695019 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:13:18.708850 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:13:18.721818 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:13:18.735029 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:13:18.748518 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:13:18.762760 1198371 ssh_runner.go:195] Run: openssl version
	I1217 01:13:18.769025 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:13:18.776848 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:13:18.785403 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:13:18.791680 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:13:18.791747 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:13:18.833548 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:13:18.841246 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1136597.pem /etc/ssl/certs/51391683.0
	I1217 01:13:18.849511 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:13:18.857415 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:13:18.865344 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:13:18.869334 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:13:18.869401 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:13:18.911389 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:13:18.919635 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11365972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:13:18.927262 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:13:18.934889 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:13:18.942516 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:13:18.946797 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:13:18.946885 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:13:18.989597 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:13:18.997442 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
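
The openssl x509 -hash calls above compute the subject-hash names (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL looks up under /etc/ssl/certs, so the copied PEMs become trusted without running update-ca-certificates. Done by hand, the pattern is roughly (paths as used above, the variable name is illustrative):

	# Compute the OpenSSL subject hash and link the cert under that name.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
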
	I1217 01:13:19.006447 1198371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:13:19.010268 1198371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:13:19.010322 1198371 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:13:19.010427 1198371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
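
The [Unit]/[Service]/[Install] snippet above is rendered into the kubelet systemd drop-in that is copied to the node a few lines below (10-kubeadm.conf); the flags pin the node name, node IP (192.168.49.3) and bootstrap kubeconfig for the m02 kubelet. A hypothetical way to inspect the result on the node:

	# Not part of this run: view the rendered drop-in and the unit it overrides.
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet
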
	I1217 01:13:19.010457 1198371 kube-vip.go:115] generating kube-vip config ...
	I1217 01:13:19.010510 1198371 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:13:19.022885 1198371 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:13:19.022944 1198371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
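
This manifest is written as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml below), so the kubelet itself runs kube-vip and advertises the HA VIP 192.168.49.254 over ARP on eth0; because the ip_vs modules were not found, control-plane load-balancing is skipped and only the VIP is provided. Hypothetical checks once the kubelet is up (the VIP only appears on whichever control-plane node currently holds the lease):

	# Not captured in this log: confirm the static pod landed and the VIP answers.
	ls /etc/kubernetes/manifests/kube-vip.yaml
	ip addr show dev eth0 | grep 192.168.49.254
	curl -k https://192.168.49.254:8443/healthz
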
	I1217 01:13:19.023020 1198371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:13:19.031490 1198371 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:13:19.031571 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:13:19.039931 1198371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:13:19.054406 1198371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:13:19.068155 1198371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:13:19.083480 1198371 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:13:19.087879 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:13:19.098629 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:13:19.227818 1198371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:13:19.245434 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:13:19.245789 1198371 start.go:318] joinCluster: &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:13:19.245936 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1217 01:13:19.246009 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:13:19.268648 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:13:19.433892 1198371 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:13:19.433995 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7q61os.5tlw0w2q9n8vglyb --discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-202151-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1217 01:13:39.972577 1198371 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7q61os.5tlw0w2q9n8vglyb --discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-202151-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (20.538553866s)
	I1217 01:13:39.972655 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1217 01:13:40.293542 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-202151-m02 minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=ha-202151 minikube.k8s.io/primary=false
	I1217 01:13:40.410159 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-202151-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1217 01:13:40.540177 1198371 start.go:320] duration metric: took 21.294383902s to joinCluster
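
The join sequence above is the standard kubeadm flow for adding a control-plane member: mint a join command on the existing control plane, run it on the new node with --control-plane (plus the CRI-O socket and advertise address), then label the node and drop the control-plane NoSchedule taint. By hand it is roughly the following (token and hash are placeholders, not the values from this run):

	# On an existing control-plane node: print a join command with a non-expiring token.
	kubeadm token create --print-join-command --ttl=0
	# On the new node (placeholder values):
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --cri-socket unix:///var/run/crio/crio.sock
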
	I1217 01:13:40.540235 1198371 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:13:40.540600 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:13:40.543877 1198371 out.go:179] * Verifying Kubernetes components...
	I1217 01:13:40.546688 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:13:40.709070 1198371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:13:40.728117 1198371 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:13:40.728212 1198371 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:13:40.728683 1198371 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	W1217 01:13:42.732895 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:44.733349 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:46.733773 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:48.733976 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:51.232669 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:53.732292 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:56.232015 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:13:58.233167 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:00.259296 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:02.732017 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:04.741748 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:07.234124 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:09.732677 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:12.231899 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:14.232003 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:16.232298 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:18.232849 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:20.732283 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	W1217 01:14:23.232698 1198371 node_ready.go:57] node "ha-202151-m02" has "Ready":"False" status (will retry)
	I1217 01:14:23.732277 1198371 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:14:23.732303 1198371 node_ready.go:38] duration metric: took 43.003594201s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:14:23.732317 1198371 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:14:23.732376 1198371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:14:23.744216 1198371 api_server.go:72] duration metric: took 43.203951339s to wait for apiserver process to appear ...
	I1217 01:14:23.744242 1198371 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:14:23.744262 1198371 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 01:14:23.752351 1198371 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 01:14:23.753451 1198371 api_server.go:141] control plane version: v1.34.2
	I1217 01:14:23.753480 1198371 api_server.go:131] duration metric: took 9.23022ms to wait for apiserver health ...
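
After the join, the test waits for the node Ready condition and then for the apiserver /healthz endpoint before checking kube-system pods; the Ready wait alone took about 43s here. Roughly equivalent manual checks (hypothetical, not part of the log):

	# Wait for the new node and probe the apiserver health endpoint.
	kubectl wait --for=condition=Ready node/ha-202151-m02 --timeout=6m
	kubectl get --raw=/healthz
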
	I1217 01:14:23.753513 1198371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:14:23.758901 1198371 system_pods.go:59] 17 kube-system pods found
	I1217 01:14:23.758986 1198371 system_pods.go:61] "coredns-66bc5c9577-4s6qf" [d08a25a9-a22d-4a68-acc7-99caa664092b] Running
	I1217 01:14:23.759011 1198371 system_pods.go:61] "coredns-66bc5c9577-km6lq" [bf3a6983-bb0f-4b64-8193-77fde64b77f6] Running
	I1217 01:14:23.759047 1198371 system_pods.go:61] "etcd-ha-202151" [30b0f05a-7be2-47ee-a93a-67270722470f] Running
	I1217 01:14:23.759070 1198371 system_pods.go:61] "etcd-ha-202151-m02" [deb559c2-a027-4460-95d0-c2e3487e0935] Running
	I1217 01:14:23.759091 1198371 system_pods.go:61] "kindnet-7b5wx" [b89af87c-c3a2-4b6a-9ea7-93332e886e9c] Running
	I1217 01:14:23.759128 1198371 system_pods.go:61] "kindnet-nt6qx" [467d4618-b198-433b-b621-e48d731ced75] Running
	I1217 01:14:23.759157 1198371 system_pods.go:61] "kube-apiserver-ha-202151" [a6242a67-dfc9-4571-9f4a-e56fbc34f173] Running
	I1217 01:14:23.759182 1198371 system_pods.go:61] "kube-apiserver-ha-202151-m02" [03c64d73-e499-4c77-86f9-051b414d95bc] Running
	I1217 01:14:23.759206 1198371 system_pods.go:61] "kube-controller-manager-ha-202151" [ff1af21e-52f8-4ad9-a2fb-2adf064da250] Running
	I1217 01:14:23.759232 1198371 system_pods.go:61] "kube-controller-manager-ha-202151-m02" [906a5fd3-3359-40f4-9e4e-f51fcef2afdf] Running
	I1217 01:14:23.759261 1198371 system_pods.go:61] "kube-proxy-5gdc5" [5189a0d1-4ee1-4205-99ff-4fa3ce427bbf] Running
	I1217 01:14:23.759289 1198371 system_pods.go:61] "kube-proxy-hp525" [eef2fc6f-e8bb-42d3-b25b-573a78f2ba43] Running
	I1217 01:14:23.759311 1198371 system_pods.go:61] "kube-scheduler-ha-202151" [c5eb2777-0d81-4c1a-8f82-ca9d434795b2] Running
	I1217 01:14:23.759335 1198371 system_pods.go:61] "kube-scheduler-ha-202151-m02" [73b3d473-dce1-43db-b4b3-e5ca02c968ea] Running
	I1217 01:14:23.759368 1198371 system_pods.go:61] "kube-vip-ha-202151" [fa07cde0-f52e-4f4b-9b3c-cf7a8ebd5b11] Running
	I1217 01:14:23.759397 1198371 system_pods.go:61] "kube-vip-ha-202151-m02" [07dac75a-3384-49dd-a76a-aa486bd5c4e9] Running
	I1217 01:14:23.759424 1198371 system_pods.go:61] "storage-provisioner" [db1e59c0-7387-4c55-b417-dd3dd6c4a2e0] Running
	I1217 01:14:23.759450 1198371 system_pods.go:74] duration metric: took 5.921643ms to wait for pod list to return data ...
	I1217 01:14:23.759482 1198371 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:14:23.765216 1198371 default_sa.go:45] found service account: "default"
	I1217 01:14:23.765301 1198371 default_sa.go:55] duration metric: took 5.795279ms for default service account to be created ...
	I1217 01:14:23.765318 1198371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:14:23.769132 1198371 system_pods.go:86] 17 kube-system pods found
	I1217 01:14:23.769165 1198371 system_pods.go:89] "coredns-66bc5c9577-4s6qf" [d08a25a9-a22d-4a68-acc7-99caa664092b] Running
	I1217 01:14:23.769173 1198371 system_pods.go:89] "coredns-66bc5c9577-km6lq" [bf3a6983-bb0f-4b64-8193-77fde64b77f6] Running
	I1217 01:14:23.769179 1198371 system_pods.go:89] "etcd-ha-202151" [30b0f05a-7be2-47ee-a93a-67270722470f] Running
	I1217 01:14:23.769183 1198371 system_pods.go:89] "etcd-ha-202151-m02" [deb559c2-a027-4460-95d0-c2e3487e0935] Running
	I1217 01:14:23.769188 1198371 system_pods.go:89] "kindnet-7b5wx" [b89af87c-c3a2-4b6a-9ea7-93332e886e9c] Running
	I1217 01:14:23.769192 1198371 system_pods.go:89] "kindnet-nt6qx" [467d4618-b198-433b-b621-e48d731ced75] Running
	I1217 01:14:23.769196 1198371 system_pods.go:89] "kube-apiserver-ha-202151" [a6242a67-dfc9-4571-9f4a-e56fbc34f173] Running
	I1217 01:14:23.769201 1198371 system_pods.go:89] "kube-apiserver-ha-202151-m02" [03c64d73-e499-4c77-86f9-051b414d95bc] Running
	I1217 01:14:23.769205 1198371 system_pods.go:89] "kube-controller-manager-ha-202151" [ff1af21e-52f8-4ad9-a2fb-2adf064da250] Running
	I1217 01:14:23.769215 1198371 system_pods.go:89] "kube-controller-manager-ha-202151-m02" [906a5fd3-3359-40f4-9e4e-f51fcef2afdf] Running
	I1217 01:14:23.769219 1198371 system_pods.go:89] "kube-proxy-5gdc5" [5189a0d1-4ee1-4205-99ff-4fa3ce427bbf] Running
	I1217 01:14:23.769226 1198371 system_pods.go:89] "kube-proxy-hp525" [eef2fc6f-e8bb-42d3-b25b-573a78f2ba43] Running
	I1217 01:14:23.769230 1198371 system_pods.go:89] "kube-scheduler-ha-202151" [c5eb2777-0d81-4c1a-8f82-ca9d434795b2] Running
	I1217 01:14:23.769234 1198371 system_pods.go:89] "kube-scheduler-ha-202151-m02" [73b3d473-dce1-43db-b4b3-e5ca02c968ea] Running
	I1217 01:14:23.769249 1198371 system_pods.go:89] "kube-vip-ha-202151" [fa07cde0-f52e-4f4b-9b3c-cf7a8ebd5b11] Running
	I1217 01:14:23.769252 1198371 system_pods.go:89] "kube-vip-ha-202151-m02" [07dac75a-3384-49dd-a76a-aa486bd5c4e9] Running
	I1217 01:14:23.769256 1198371 system_pods.go:89] "storage-provisioner" [db1e59c0-7387-4c55-b417-dd3dd6c4a2e0] Running
	I1217 01:14:23.769263 1198371 system_pods.go:126] duration metric: took 3.939313ms to wait for k8s-apps to be running ...
	I1217 01:14:23.769276 1198371 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:14:23.769335 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:14:23.793339 1198371 system_svc.go:56] duration metric: took 24.040012ms WaitForService to wait for kubelet
	I1217 01:14:23.793418 1198371 kubeadm.go:587] duration metric: took 43.253148264s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:14:23.793454 1198371 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:14:23.796531 1198371 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 01:14:23.796611 1198371 node_conditions.go:123] node cpu capacity is 2
	I1217 01:14:23.796640 1198371 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 01:14:23.796665 1198371 node_conditions.go:123] node cpu capacity is 2
	I1217 01:14:23.796706 1198371 node_conditions.go:105] duration metric: took 3.217298ms to run NodePressure ...
	I1217 01:14:23.796736 1198371 start.go:242] waiting for startup goroutines ...
	I1217 01:14:23.796794 1198371 start.go:256] writing updated cluster config ...
	I1217 01:14:23.800148 1198371 out.go:203] 
	I1217 01:14:23.803326 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:14:23.803506 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:14:23.806688 1198371 out.go:179] * Starting "ha-202151-m03" control-plane node in "ha-202151" cluster
	I1217 01:14:23.809616 1198371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:14:23.812772 1198371 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:14:23.815683 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:14:23.815787 1198371 cache.go:65] Caching tarball of preloaded images
	I1217 01:14:23.815903 1198371 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:14:23.815914 1198371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:14:23.816035 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:14:23.815762 1198371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:14:23.836180 1198371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:14:23.836203 1198371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:14:23.836241 1198371 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:14:23.836270 1198371 start.go:360] acquireMachinesLock for ha-202151-m03: {Name:mkbd1fcc56226146543e3e5b0a84424489015abf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:14:23.836383 1198371 start.go:364] duration metric: took 93.601µs to acquireMachinesLock for "ha-202151-m03"
	I1217 01:14:23.836542 1198371 start.go:93] Provisioning new machine with config: &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fal
se kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:14:23.836658 1198371 start.go:125] createHost starting for "m03" (driver="docker")
	I1217 01:14:23.840188 1198371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:14:23.840299 1198371 start.go:159] libmachine.API.Create for "ha-202151" (driver="docker")
	I1217 01:14:23.840325 1198371 client.go:173] LocalClient.Create starting
	I1217 01:14:23.840390 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 01:14:23.840484 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:14:23.840508 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:14:23.840570 1198371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 01:14:23.840595 1198371 main.go:143] libmachine: Decoding PEM data...
	I1217 01:14:23.840612 1198371 main.go:143] libmachine: Parsing certificate...
	I1217 01:14:23.840853 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:14:23.869150 1198371 network_create.go:77] Found existing network {name:ha-202151 subnet:0x4001c06bd0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1217 01:14:23.869190 1198371 kic.go:121] calculated static IP "192.168.49.4" for the "ha-202151-m03" container
	I1217 01:14:23.869270 1198371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:14:23.886949 1198371 cli_runner.go:164] Run: docker volume create ha-202151-m03 --label name.minikube.sigs.k8s.io=ha-202151-m03 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:14:23.906416 1198371 oci.go:103] Successfully created a docker volume ha-202151-m03
	I1217 01:14:23.906502 1198371 cli_runner.go:164] Run: docker run --rm --name ha-202151-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151-m03 --entrypoint /usr/bin/test -v ha-202151-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:14:24.484408 1198371 oci.go:107] Successfully prepared a docker volume ha-202151-m03
	I1217 01:14:24.484519 1198371 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:14:24.484539 1198371 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:14:24.484611 1198371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:14:28.493746 1198371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-202151-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.009087877s)
	I1217 01:14:28.493779 1198371 kic.go:203] duration metric: took 4.009235893s to extract preloaded images to volume ...
	W1217 01:14:28.493916 1198371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 01:14:28.494018 1198371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:14:28.557843 1198371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-202151-m03 --name ha-202151-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-202151-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-202151-m03 --network ha-202151 --ip 192.168.49.4 --volume ha-202151-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
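
The docker run above creates the m03 node container on the existing ha-202151 network with the static IP calculated a few lines earlier (192.168.49.4), mounts the preloaded volume at /var, and publishes the SSH and API ports on loopback. A hypothetical sanity check of the assigned address:

	# Not part of this run: confirm the container got the requested static IP.
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-202151-m03
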
	I1217 01:14:28.906817 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m03 --format={{.State.Running}}
	I1217 01:14:28.931078 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m03 --format={{.State.Status}}
	I1217 01:14:28.976408 1198371 cli_runner.go:164] Run: docker exec ha-202151-m03 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:14:29.031504 1198371 oci.go:144] the created container "ha-202151-m03" has a running status.
	I1217 01:14:29.031536 1198371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa...
	I1217 01:14:29.164542 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 01:14:29.164583 1198371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:14:29.185666 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m03 --format={{.State.Status}}
	I1217 01:14:29.220818 1198371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:14:29.220843 1198371 kic_runner.go:114] Args: [docker exec --privileged ha-202151-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:14:29.277908 1198371 cli_runner.go:164] Run: docker container inspect ha-202151-m03 --format={{.State.Status}}
	I1217 01:14:29.305743 1198371 machine.go:94] provisionDockerMachine start ...
	I1217 01:14:29.305835 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:29.336384 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:14:29.336748 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33923 <nil> <nil>}
	I1217 01:14:29.336765 1198371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:14:29.338845 1198371 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38604->127.0.0.1:33923: read: connection reset by peer
	I1217 01:14:32.476503 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m03
	
	I1217 01:14:32.476531 1198371 ubuntu.go:182] provisioning hostname "ha-202151-m03"
	I1217 01:14:32.476615 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:32.496202 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:14:32.496628 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33923 <nil> <nil>}
	I1217 01:14:32.496649 1198371 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m03 && echo "ha-202151-m03" | sudo tee /etc/hostname
	I1217 01:14:32.640988 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m03
	
	I1217 01:14:32.641111 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:32.671061 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:14:32.671397 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33923 <nil> <nil>}
	I1217 01:14:32.671418 1198371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:14:32.809488 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:14:32.809515 1198371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:14:32.809534 1198371 ubuntu.go:190] setting up certificates
	I1217 01:14:32.809545 1198371 provision.go:84] configureAuth start
	I1217 01:14:32.809638 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m03
	I1217 01:14:32.834739 1198371 provision.go:143] copyHostCerts
	I1217 01:14:32.834801 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:14:32.834836 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:14:32.834853 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:14:32.834936 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:14:32.835022 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:14:32.835044 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:14:32.835052 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:14:32.835079 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:14:32.835124 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:14:32.835143 1198371 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:14:32.835152 1198371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:14:32.835178 1198371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:14:32.835232 1198371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m03 san=[127.0.0.1 192.168.49.4 ha-202151-m03 localhost minikube]
	I1217 01:14:33.016112 1198371 provision.go:177] copyRemoteCerts
	I1217 01:14:33.016183 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:14:33.016228 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.036151 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:14:33.136343 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:14:33.136424 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:14:33.156608 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:14:33.156729 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:14:33.176588 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:14:33.176671 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:14:33.196866 1198371 provision.go:87] duration metric: took 387.285246ms to configureAuth
	I1217 01:14:33.196893 1198371 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:14:33.197161 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:14:33.197273 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.215985 1198371 main.go:143] libmachine: Using SSH client type: native
	I1217 01:14:33.216293 1198371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33923 <nil> <nil>}
	I1217 01:14:33.216307 1198371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:14:33.586125 1198371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:14:33.586156 1198371 machine.go:97] duration metric: took 4.280386472s to provisionDockerMachine
	I1217 01:14:33.586167 1198371 client.go:176] duration metric: took 9.745830399s to LocalClient.Create
	I1217 01:14:33.586181 1198371 start.go:167] duration metric: took 9.745883174s to libmachine.API.Create "ha-202151"
	I1217 01:14:33.586188 1198371 start.go:293] postStartSetup for "ha-202151-m03" (driver="docker")
	I1217 01:14:33.586199 1198371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:14:33.586265 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:14:33.586319 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.606656 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:14:33.713438 1198371 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:14:33.717150 1198371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:14:33.717179 1198371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:14:33.717192 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:14:33.717252 1198371 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:14:33.717337 1198371 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:14:33.717356 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:14:33.717463 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:14:33.725770 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:14:33.744480 1198371 start.go:296] duration metric: took 158.276307ms for postStartSetup
	I1217 01:14:33.744923 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m03
	I1217 01:14:33.763809 1198371 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:14:33.764117 1198371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:14:33.764159 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.790160 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:14:33.886664 1198371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:14:33.891592 1198371 start.go:128] duration metric: took 10.054918694s to createHost
	I1217 01:14:33.891620 1198371 start.go:83] releasing machines lock for "ha-202151-m03", held for 10.055222939s
	I1217 01:14:33.891697 1198371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m03
	I1217 01:14:33.914514 1198371 out.go:179] * Found network options:
	I1217 01:14:33.917459 1198371 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1217 01:14:33.920380 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:14:33.920411 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:14:33.920468 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:14:33.920478 1198371 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:14:33.920554 1198371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:14:33.920599 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.920936 1198371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:14:33.920997 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:14:33.949579 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:14:33.950140 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:14:34.094603 1198371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:14:34.158644 1198371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:14:34.158793 1198371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:14:34.197381 1198371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 01:14:34.197458 1198371 start.go:496] detecting cgroup driver to use...
	I1217 01:14:34.197523 1198371 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:14:34.197602 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:14:34.217723 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:14:34.231135 1198371 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:14:34.231213 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:14:34.252092 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:14:34.273118 1198371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:14:34.395028 1198371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:14:34.535437 1198371 docker.go:234] disabling docker service ...
	I1217 01:14:34.535506 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:14:34.558319 1198371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:14:34.573874 1198371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:14:34.713740 1198371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:14:34.866049 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:14:34.881297 1198371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:14:34.895796 1198371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:14:34.895864 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.905230 1198371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:14:34.905305 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.915814 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.926079 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.935489 1198371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:14:34.944104 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.953260 1198371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.967880 1198371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:14:34.977745 1198371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:14:34.985531 1198371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:14:34.993206 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:14:35.116750 1198371 ssh_runner.go:195] Run: sudo systemctl restart crio
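Note: the block above patches CRI-O's drop-in config with sed, pinning the pause image to registry.k8s.io/pause:3.10.1, switching cgroup_manager to cgroupfs, moving conmon into the pod cgroup, and opening unprivileged ports, then restarts crio. A rough Go equivalent of the two central edits is sketched below; the file path comes from the log, everything else is illustrative.

// Sketch: a Go equivalent of the sed edits above on CRI-O's 02-crio.conf drop-in.
// Only the pause_image and cgroup_manager rewrites are shown; restart crio afterwards.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}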
	I1217 01:14:35.319330 1198371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:14:35.319457 1198371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:14:35.323872 1198371 start.go:564] Will wait 60s for crictl version
	I1217 01:14:35.323987 1198371 ssh_runner.go:195] Run: which crictl
	I1217 01:14:35.328406 1198371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:14:35.357777 1198371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:14:35.357947 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:14:35.388924 1198371 ssh_runner.go:195] Run: crio --version
	I1217 01:14:35.423589 1198371 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:14:35.426512 1198371 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:14:35.429322 1198371 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1217 01:14:35.432118 1198371 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:14:35.453197 1198371 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:14:35.457362 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:14:35.468040 1198371 mustload.go:66] Loading cluster: ha-202151
	I1217 01:14:35.468305 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:14:35.468623 1198371 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:14:35.486526 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:14:35.486835 1198371 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.4
	I1217 01:14:35.486850 1198371 certs.go:195] generating shared ca certs ...
	I1217 01:14:35.486865 1198371 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:14:35.486992 1198371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:14:35.487044 1198371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:14:35.487057 1198371 certs.go:257] generating profile certs ...
	I1217 01:14:35.487143 1198371 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:14:35.487180 1198371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.3dc17a5d
	I1217 01:14:35.487199 1198371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.3dc17a5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1217 01:14:35.670835 1198371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.3dc17a5d ...
	I1217 01:14:35.670866 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.3dc17a5d: {Name:mkfa51e0b655fdae6ea8e9901b439f3ffbe34490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:14:35.671073 1198371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.3dc17a5d ...
	I1217 01:14:35.671090 1198371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.3dc17a5d: {Name:mk1257f0cd65b4315abdf58e71bbf09c9470ab58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:14:35.671189 1198371 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.3dc17a5d -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:14:35.671328 1198371 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.3dc17a5d -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:14:35.671467 1198371 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:14:35.671487 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:14:35.671504 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:14:35.671520 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:14:35.671533 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:14:35.671549 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:14:35.671568 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:14:35.671581 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:14:35.671592 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:14:35.671648 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:14:35.671686 1198371 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:14:35.671698 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:14:35.671725 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:14:35.671752 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:14:35.671780 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:14:35.671827 1198371 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:14:35.671862 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:14:35.671877 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:14:35.671888 1198371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:14:35.671954 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:14:35.694005 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:14:35.788763 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:14:35.793140 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:14:35.801743 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:14:35.805748 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:14:35.815005 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:14:35.818503 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:14:35.827659 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:14:35.831645 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:14:35.841240 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:14:35.844943 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:14:35.853859 1198371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:14:35.862974 1198371 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:14:35.872909 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:14:35.893050 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:14:35.912175 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:14:35.931847 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:14:35.950547 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1217 01:14:35.969494 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 01:14:35.987689 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:14:36.018477 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:14:36.039769 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:14:36.061666 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:14:36.085029 1198371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:14:36.108998 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:14:36.126221 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:14:36.141009 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:14:36.157725 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:14:36.175055 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:14:36.190908 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:14:36.208531 1198371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:14:36.224070 1198371 ssh_runner.go:195] Run: openssl version
	I1217 01:14:36.230553 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:14:36.238447 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:14:36.246870 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:14:36.250830 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:14:36.250940 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:14:36.295226 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:14:36.305167 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:14:36.313833 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:14:36.322783 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:14:36.330379 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:14:36.334355 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:14:36.334469 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:14:36.376547 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:14:36.384276 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1136597.pem /etc/ssl/certs/51391683.0
	I1217 01:14:36.392405 1198371 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:14:36.399784 1198371 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:14:36.407103 1198371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:14:36.410968 1198371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:14:36.411038 1198371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:14:36.453012 1198371 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:14:36.462288 1198371 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11365972.pem /etc/ssl/certs/3ec20f2e.0
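Note: the sequence above wires the copied CAs into the node's trust store: each PEM lands under /usr/share/ca-certificates, gets hashed with openssl x509 -hash, and is exposed as an /etc/ssl/certs/<hash>.0 symlink. A small sketch of that pattern, shelling out to openssl the same way the log does; paths are the ones from this run, the rest is illustrative.

// Sketch: hash a CA cert with openssl and publish it as /etc/ssl/certs/<hash>.0,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA in this run

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}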
	I1217 01:14:36.469826 1198371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:14:36.473760 1198371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:14:36.473812 1198371 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1217 01:14:36.473912 1198371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:14:36.473940 1198371 kube-vip.go:115] generating kube-vip config ...
	I1217 01:14:36.473990 1198371 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:14:36.486571 1198371 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:14:36.486691 1198371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:14:36.486836 1198371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:14:36.495067 1198371 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:14:36.495188 1198371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:14:36.503149 1198371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:14:36.521264 1198371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:14:36.535676 1198371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:14:36.550825 1198371 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:14:36.554542 1198371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:14:36.565000 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:14:36.682120 1198371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:14:36.706669 1198371 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:14:36.706996 1198371 start.go:318] joinCluster: &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:14:36.707189 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1217 01:14:36.707259 1198371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:14:36.730798 1198371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:14:36.922895 1198371 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:14:36.922981 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfzt99.07s8mvvbi66yvb5a --discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-202151-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1217 01:14:57.283522 1198371 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bfzt99.07s8mvvbi66yvb5a --discovery-token-ca-cert-hash sha256:70484acf63cbe49befdcef68efc1891dd6a9fbe66b77fae4436cd9200ba646e6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-202151-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (20.360518927s)
	I1217 01:14:57.283596 1198371 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1217 01:14:57.664607 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-202151-m03 minikube.k8s.io/updated_at=2025_12_17T01_14_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=ha-202151 minikube.k8s.io/primary=false
	I1217 01:14:57.802442 1198371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-202151-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1217 01:14:57.943406 1198371 start.go:320] duration metric: took 21.236406983s to joinCluster
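Note: once kubeadm join succeeds, the new node is labeled and the control-plane NoSchedule taint is removed so the control-plane member also schedules workloads (Worker:true in the config above). A hedged client-go sketch of that taint removal; the node name comes from this run, while the kubeconfig path and everything else are illustrative.

// Sketch: client-go equivalent of the "kubectl taint nodes ... control-plane:NoSchedule-"
// command above, dropping the NoSchedule taint from the newly joined node.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-202151-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue // drop this taint, keep everything else
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}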
	I1217 01:14:57.943469 1198371 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:14:57.943848 1198371 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:14:57.946587 1198371 out.go:179] * Verifying Kubernetes components...
	I1217 01:14:57.949581 1198371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:14:58.105778 1198371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:14:58.128131 1198371 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:14:58.128220 1198371 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:14:58.128568 1198371 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m03" to be "Ready" ...
	W1217 01:15:00.187294 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:02.633093 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:05.133163 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:07.133574 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:09.632960 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:11.633136 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:14.131895 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:16.133175 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:18.134817 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:20.632370 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:23.132670 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:25.632280 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:27.633068 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:30.133111 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:32.632491 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:35.132557 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:37.134385 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	W1217 01:15:39.631885 1198371 node_ready.go:57] node "ha-202151-m03" has "Ready":"False" status (will retry)
	I1217 01:15:41.632619 1198371 node_ready.go:49] node "ha-202151-m03" is "Ready"
	I1217 01:15:41.632648 1198371 node_ready.go:38] duration metric: took 43.504055832s for node "ha-202151-m03" to be "Ready" ...
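Note: the ~43s wait above is a poll loop against the apiserver until the node reports the Ready condition. A minimal client-go sketch of the same check; the node name and the 6m budget mirror the log, while the kubeconfig path and the 2s poll interval are assumptions for the sketch.

// Sketch: poll until node "ha-202151-m03" has the Ready condition set to True,
// as in the node_ready.go wait above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-202151-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-202151-m03" is Ready`)
}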
	I1217 01:15:41.632662 1198371 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:15:41.632725 1198371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:15:41.650957 1198371 api_server.go:72] duration metric: took 43.707454211s to wait for apiserver process to appear ...
	I1217 01:15:41.650981 1198371 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:15:41.651004 1198371 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 01:15:41.661484 1198371 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 01:15:41.662501 1198371 api_server.go:141] control plane version: v1.34.2
	I1217 01:15:41.662525 1198371 api_server.go:131] duration metric: took 11.535459ms to wait for apiserver health ...
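Note: the healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch of that probe; skipping TLS verification is an assumption made for brevity, the real client trusts the cluster CA instead.

// Sketch: GET /healthz on the apiserver and treat 200 "ok" as healthy,
// matching the check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}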
	I1217 01:15:41.662535 1198371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:15:41.669000 1198371 system_pods.go:59] 24 kube-system pods found
	I1217 01:15:41.669040 1198371 system_pods.go:61] "coredns-66bc5c9577-4s6qf" [d08a25a9-a22d-4a68-acc7-99caa664092b] Running
	I1217 01:15:41.669049 1198371 system_pods.go:61] "coredns-66bc5c9577-km6lq" [bf3a6983-bb0f-4b64-8193-77fde64b77f6] Running
	I1217 01:15:41.669054 1198371 system_pods.go:61] "etcd-ha-202151" [30b0f05a-7be2-47ee-a93a-67270722470f] Running
	I1217 01:15:41.669082 1198371 system_pods.go:61] "etcd-ha-202151-m02" [deb559c2-a027-4460-95d0-c2e3487e0935] Running
	I1217 01:15:41.669092 1198371 system_pods.go:61] "etcd-ha-202151-m03" [e73fe860-87bd-4901-be8c-ae78a8254fe3] Running
	I1217 01:15:41.669096 1198371 system_pods.go:61] "kindnet-7b5wx" [b89af87c-c3a2-4b6a-9ea7-93332e886e9c] Running
	I1217 01:15:41.669101 1198371 system_pods.go:61] "kindnet-97bs4" [407af8cd-b2f6-4952-aab4-34125a63b793] Running
	I1217 01:15:41.669105 1198371 system_pods.go:61] "kindnet-nt6qx" [467d4618-b198-433b-b621-e48d731ced75] Running
	I1217 01:15:41.669109 1198371 system_pods.go:61] "kube-apiserver-ha-202151" [a6242a67-dfc9-4571-9f4a-e56fbc34f173] Running
	I1217 01:15:41.669113 1198371 system_pods.go:61] "kube-apiserver-ha-202151-m02" [03c64d73-e499-4c77-86f9-051b414d95bc] Running
	I1217 01:15:41.669127 1198371 system_pods.go:61] "kube-apiserver-ha-202151-m03" [f88ec1db-87f1-4c5d-b303-8984719b12af] Running
	I1217 01:15:41.669131 1198371 system_pods.go:61] "kube-controller-manager-ha-202151" [ff1af21e-52f8-4ad9-a2fb-2adf064da250] Running
	I1217 01:15:41.669135 1198371 system_pods.go:61] "kube-controller-manager-ha-202151-m02" [906a5fd3-3359-40f4-9e4e-f51fcef2afdf] Running
	I1217 01:15:41.669139 1198371 system_pods.go:61] "kube-controller-manager-ha-202151-m03" [ee49f2ed-fe64-4901-bb65-d076427f004b] Running
	I1217 01:15:41.669169 1198371 system_pods.go:61] "kube-proxy-5gdc5" [5189a0d1-4ee1-4205-99ff-4fa3ce427bbf] Running
	I1217 01:15:41.669178 1198371 system_pods.go:61] "kube-proxy-gghqw" [4b3ee867-8203-4f30-a67b-426f7f07241a] Running
	I1217 01:15:41.669181 1198371 system_pods.go:61] "kube-proxy-hp525" [eef2fc6f-e8bb-42d3-b25b-573a78f2ba43] Running
	I1217 01:15:41.669185 1198371 system_pods.go:61] "kube-scheduler-ha-202151" [c5eb2777-0d81-4c1a-8f82-ca9d434795b2] Running
	I1217 01:15:41.669188 1198371 system_pods.go:61] "kube-scheduler-ha-202151-m02" [73b3d473-dce1-43db-b4b3-e5ca02c968ea] Running
	I1217 01:15:41.669192 1198371 system_pods.go:61] "kube-scheduler-ha-202151-m03" [e657173b-7f8d-4171-a151-4b6ecc5d9e6f] Running
	I1217 01:15:41.669196 1198371 system_pods.go:61] "kube-vip-ha-202151" [fa07cde0-f52e-4f4b-9b3c-cf7a8ebd5b11] Running
	I1217 01:15:41.669199 1198371 system_pods.go:61] "kube-vip-ha-202151-m02" [07dac75a-3384-49dd-a76a-aa486bd5c4e9] Running
	I1217 01:15:41.669210 1198371 system_pods.go:61] "kube-vip-ha-202151-m03" [27a79240-12e7-4876-ad84-f4ab9dc75bd1] Running
	I1217 01:15:41.669220 1198371 system_pods.go:61] "storage-provisioner" [db1e59c0-7387-4c55-b417-dd3dd6c4a2e0] Running
	I1217 01:15:41.669226 1198371 system_pods.go:74] duration metric: took 6.685333ms to wait for pod list to return data ...
	I1217 01:15:41.669246 1198371 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:15:41.672628 1198371 default_sa.go:45] found service account: "default"
	I1217 01:15:41.672658 1198371 default_sa.go:55] duration metric: took 3.400252ms for default service account to be created ...
	I1217 01:15:41.672668 1198371 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:15:41.679000 1198371 system_pods.go:86] 24 kube-system pods found
	I1217 01:15:41.679042 1198371 system_pods.go:89] "coredns-66bc5c9577-4s6qf" [d08a25a9-a22d-4a68-acc7-99caa664092b] Running
	I1217 01:15:41.679050 1198371 system_pods.go:89] "coredns-66bc5c9577-km6lq" [bf3a6983-bb0f-4b64-8193-77fde64b77f6] Running
	I1217 01:15:41.679055 1198371 system_pods.go:89] "etcd-ha-202151" [30b0f05a-7be2-47ee-a93a-67270722470f] Running
	I1217 01:15:41.679061 1198371 system_pods.go:89] "etcd-ha-202151-m02" [deb559c2-a027-4460-95d0-c2e3487e0935] Running
	I1217 01:15:41.679067 1198371 system_pods.go:89] "etcd-ha-202151-m03" [e73fe860-87bd-4901-be8c-ae78a8254fe3] Running
	I1217 01:15:41.679071 1198371 system_pods.go:89] "kindnet-7b5wx" [b89af87c-c3a2-4b6a-9ea7-93332e886e9c] Running
	I1217 01:15:41.679076 1198371 system_pods.go:89] "kindnet-97bs4" [407af8cd-b2f6-4952-aab4-34125a63b793] Running
	I1217 01:15:41.679081 1198371 system_pods.go:89] "kindnet-nt6qx" [467d4618-b198-433b-b621-e48d731ced75] Running
	I1217 01:15:41.679086 1198371 system_pods.go:89] "kube-apiserver-ha-202151" [a6242a67-dfc9-4571-9f4a-e56fbc34f173] Running
	I1217 01:15:41.679090 1198371 system_pods.go:89] "kube-apiserver-ha-202151-m02" [03c64d73-e499-4c77-86f9-051b414d95bc] Running
	I1217 01:15:41.679094 1198371 system_pods.go:89] "kube-apiserver-ha-202151-m03" [f88ec1db-87f1-4c5d-b303-8984719b12af] Running
	I1217 01:15:41.679099 1198371 system_pods.go:89] "kube-controller-manager-ha-202151" [ff1af21e-52f8-4ad9-a2fb-2adf064da250] Running
	I1217 01:15:41.679108 1198371 system_pods.go:89] "kube-controller-manager-ha-202151-m02" [906a5fd3-3359-40f4-9e4e-f51fcef2afdf] Running
	I1217 01:15:41.679113 1198371 system_pods.go:89] "kube-controller-manager-ha-202151-m03" [ee49f2ed-fe64-4901-bb65-d076427f004b] Running
	I1217 01:15:41.679122 1198371 system_pods.go:89] "kube-proxy-5gdc5" [5189a0d1-4ee1-4205-99ff-4fa3ce427bbf] Running
	I1217 01:15:41.679127 1198371 system_pods.go:89] "kube-proxy-gghqw" [4b3ee867-8203-4f30-a67b-426f7f07241a] Running
	I1217 01:15:41.679139 1198371 system_pods.go:89] "kube-proxy-hp525" [eef2fc6f-e8bb-42d3-b25b-573a78f2ba43] Running
	I1217 01:15:41.679144 1198371 system_pods.go:89] "kube-scheduler-ha-202151" [c5eb2777-0d81-4c1a-8f82-ca9d434795b2] Running
	I1217 01:15:41.679148 1198371 system_pods.go:89] "kube-scheduler-ha-202151-m02" [73b3d473-dce1-43db-b4b3-e5ca02c968ea] Running
	I1217 01:15:41.679154 1198371 system_pods.go:89] "kube-scheduler-ha-202151-m03" [e657173b-7f8d-4171-a151-4b6ecc5d9e6f] Running
	I1217 01:15:41.679161 1198371 system_pods.go:89] "kube-vip-ha-202151" [fa07cde0-f52e-4f4b-9b3c-cf7a8ebd5b11] Running
	I1217 01:15:41.679165 1198371 system_pods.go:89] "kube-vip-ha-202151-m02" [07dac75a-3384-49dd-a76a-aa486bd5c4e9] Running
	I1217 01:15:41.679170 1198371 system_pods.go:89] "kube-vip-ha-202151-m03" [27a79240-12e7-4876-ad84-f4ab9dc75bd1] Running
	I1217 01:15:41.679174 1198371 system_pods.go:89] "storage-provisioner" [db1e59c0-7387-4c55-b417-dd3dd6c4a2e0] Running
	I1217 01:15:41.679180 1198371 system_pods.go:126] duration metric: took 6.507041ms to wait for k8s-apps to be running ...
	I1217 01:15:41.679191 1198371 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:15:41.679253 1198371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:15:41.702954 1198371 system_svc.go:56] duration metric: took 23.752564ms WaitForService to wait for kubelet
	I1217 01:15:41.702988 1198371 kubeadm.go:587] duration metric: took 43.759489564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:15:41.703008 1198371 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:15:41.708074 1198371 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 01:15:41.708106 1198371 node_conditions.go:123] node cpu capacity is 2
	I1217 01:15:41.708121 1198371 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 01:15:41.708126 1198371 node_conditions.go:123] node cpu capacity is 2
	I1217 01:15:41.708130 1198371 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 01:15:41.708134 1198371 node_conditions.go:123] node cpu capacity is 2
	I1217 01:15:41.708139 1198371 node_conditions.go:105] duration metric: took 5.12631ms to run NodePressure ...
	I1217 01:15:41.708153 1198371 start.go:242] waiting for startup goroutines ...
	I1217 01:15:41.708182 1198371 start.go:256] writing updated cluster config ...
	I1217 01:15:41.708568 1198371 ssh_runner.go:195] Run: rm -f paused
	I1217 01:15:41.712765 1198371 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:15:41.713320 1198371 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
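Note: the rest.Config above leaves QPS and Burst at 0, which client-go treats as the defaults of 5 requests/s and a burst of 10; that is what produces the "Waited before sending request ... client-side throttling" lines during the pod readiness sweep below. A hedged sketch of raising those limits when building a client; the values 50/100 are illustrative, not minikube's settings.

// Sketch: raise client-go's client-side rate limits so bursts of GETs are not throttled.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go defaults to 5 when this is left at 0
	cfg.Burst = 100 // client-go defaults to 10 when this is left at 0
	return kubernetes.NewForConfig(cfg)
}

func main() {
	cs, err := newFastClient(clientcmd.RecommendedHomeFile) // assumed ~/.kube/config
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}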
	I1217 01:15:41.733499 1198371 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4s6qf" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.740277 1198371 pod_ready.go:94] pod "coredns-66bc5c9577-4s6qf" is "Ready"
	I1217 01:15:41.740312 1198371 pod_ready.go:86] duration metric: took 6.784038ms for pod "coredns-66bc5c9577-4s6qf" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.740325 1198371 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-km6lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.747386 1198371 pod_ready.go:94] pod "coredns-66bc5c9577-km6lq" is "Ready"
	I1217 01:15:41.747456 1198371 pod_ready.go:86] duration metric: took 7.12318ms for pod "coredns-66bc5c9577-km6lq" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.751447 1198371 pod_ready.go:83] waiting for pod "etcd-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.760194 1198371 pod_ready.go:94] pod "etcd-ha-202151" is "Ready"
	I1217 01:15:41.760234 1198371 pod_ready.go:86] duration metric: took 8.720388ms for pod "etcd-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.760244 1198371 pod_ready.go:83] waiting for pod "etcd-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.774147 1198371 pod_ready.go:94] pod "etcd-ha-202151-m02" is "Ready"
	I1217 01:15:41.774172 1198371 pod_ready.go:86] duration metric: took 13.92106ms for pod "etcd-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.774182 1198371 pod_ready.go:83] waiting for pod "etcd-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:41.914348 1198371 request.go:683] "Waited before sending request" delay="140.081999ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-202151-m03"
	I1217 01:15:42.113877 1198371 request.go:683] "Waited before sending request" delay="196.280957ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m03"
	I1217 01:15:42.118942 1198371 pod_ready.go:94] pod "etcd-ha-202151-m03" is "Ready"
	I1217 01:15:42.118977 1198371 pod_ready.go:86] duration metric: took 344.786696ms for pod "etcd-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:42.314305 1198371 request.go:683] "Waited before sending request" delay="195.165698ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1217 01:15:42.319589 1198371 pod_ready.go:83] waiting for pod "kube-apiserver-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:42.513987 1198371 request.go:683] "Waited before sending request" delay="194.270583ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-202151"
	I1217 01:15:42.714243 1198371 request.go:683] "Waited before sending request" delay="196.331508ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151"
	I1217 01:15:42.718743 1198371 pod_ready.go:94] pod "kube-apiserver-ha-202151" is "Ready"
	I1217 01:15:42.718775 1198371 pod_ready.go:86] duration metric: took 399.153097ms for pod "kube-apiserver-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:42.718785 1198371 pod_ready.go:83] waiting for pod "kube-apiserver-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:42.914138 1198371 request.go:683] "Waited before sending request" delay="195.266823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-202151-m02"
	I1217 01:15:43.113905 1198371 request.go:683] "Waited before sending request" delay="195.245942ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m02"
	I1217 01:15:43.117533 1198371 pod_ready.go:94] pod "kube-apiserver-ha-202151-m02" is "Ready"
	I1217 01:15:43.117563 1198371 pod_ready.go:86] duration metric: took 398.770659ms for pod "kube-apiserver-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:43.117574 1198371 pod_ready.go:83] waiting for pod "kube-apiserver-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:43.314001 1198371 request.go:683] "Waited before sending request" delay="196.336358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-202151-m03"
	I1217 01:15:43.513796 1198371 request.go:683] "Waited before sending request" delay="196.250346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m03"
	I1217 01:15:43.517052 1198371 pod_ready.go:94] pod "kube-apiserver-ha-202151-m03" is "Ready"
	I1217 01:15:43.517079 1198371 pod_ready.go:86] duration metric: took 399.498067ms for pod "kube-apiserver-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:43.714540 1198371 request.go:683] "Waited before sending request" delay="197.344298ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1217 01:15:43.718540 1198371 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:43.913833 1198371 request.go:683] "Waited before sending request" delay="195.123591ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-202151"
	I1217 01:15:44.114391 1198371 request.go:683] "Waited before sending request" delay="197.235305ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151"
	I1217 01:15:44.117910 1198371 pod_ready.go:94] pod "kube-controller-manager-ha-202151" is "Ready"
	I1217 01:15:44.117938 1198371 pod_ready.go:86] duration metric: took 399.324025ms for pod "kube-controller-manager-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:44.117950 1198371 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:44.314383 1198371 request.go:683] "Waited before sending request" delay="196.359424ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-202151-m02"
	I1217 01:15:44.514060 1198371 request.go:683] "Waited before sending request" delay="196.304935ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m02"
	I1217 01:15:44.517491 1198371 pod_ready.go:94] pod "kube-controller-manager-ha-202151-m02" is "Ready"
	I1217 01:15:44.517520 1198371 pod_ready.go:86] duration metric: took 399.563256ms for pod "kube-controller-manager-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:44.517533 1198371 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:44.713795 1198371 request.go:683] "Waited before sending request" delay="196.187793ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-202151-m03"
	I1217 01:15:44.913882 1198371 request.go:683] "Waited before sending request" delay="196.211998ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m03"
	I1217 01:15:44.917272 1198371 pod_ready.go:94] pod "kube-controller-manager-ha-202151-m03" is "Ready"
	I1217 01:15:44.917300 1198371 pod_ready.go:86] duration metric: took 399.760396ms for pod "kube-controller-manager-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:45.113908 1198371 request.go:683] "Waited before sending request" delay="196.490727ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1217 01:15:45.126934 1198371 pod_ready.go:83] waiting for pod "kube-proxy-5gdc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:45.314501 1198371 request.go:683] "Waited before sending request" delay="187.304358ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5gdc5"
	I1217 01:15:45.513811 1198371 request.go:683] "Waited before sending request" delay="195.208284ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151"
	I1217 01:15:45.517417 1198371 pod_ready.go:94] pod "kube-proxy-5gdc5" is "Ready"
	I1217 01:15:45.517514 1198371 pod_ready.go:86] duration metric: took 390.451776ms for pod "kube-proxy-5gdc5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:45.517546 1198371 pod_ready.go:83] waiting for pod "kube-proxy-gghqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:45.714049 1198371 request.go:683] "Waited before sending request" delay="196.378082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gghqw"
	I1217 01:15:45.914203 1198371 request.go:683] "Waited before sending request" delay="196.29527ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m03"
	I1217 01:15:45.917151 1198371 pod_ready.go:94] pod "kube-proxy-gghqw" is "Ready"
	I1217 01:15:45.917187 1198371 pod_ready.go:86] duration metric: took 399.597391ms for pod "kube-proxy-gghqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:45.917199 1198371 pod_ready.go:83] waiting for pod "kube-proxy-hp525" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:46.114662 1198371 request.go:683] "Waited before sending request" delay="197.326127ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hp525"
	I1217 01:15:46.314781 1198371 request.go:683] "Waited before sending request" delay="196.339602ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m02"
	I1217 01:15:46.317786 1198371 pod_ready.go:94] pod "kube-proxy-hp525" is "Ready"
	I1217 01:15:46.317814 1198371 pod_ready.go:86] duration metric: took 400.608549ms for pod "kube-proxy-hp525" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:46.514073 1198371 request.go:683] "Waited before sending request" delay="196.144768ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1217 01:15:46.524187 1198371 pod_ready.go:83] waiting for pod "kube-scheduler-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:46.714629 1198371 request.go:683] "Waited before sending request" delay="190.317743ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-202151"
	I1217 01:15:46.914172 1198371 request.go:683] "Waited before sending request" delay="196.208833ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151"
	I1217 01:15:46.917835 1198371 pod_ready.go:94] pod "kube-scheduler-ha-202151" is "Ready"
	I1217 01:15:46.917914 1198371 pod_ready.go:86] duration metric: took 393.697507ms for pod "kube-scheduler-ha-202151" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:46.917933 1198371 pod_ready.go:83] waiting for pod "kube-scheduler-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:47.114342 1198371 request.go:683] "Waited before sending request" delay="196.330479ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-202151-m02"
	I1217 01:15:47.314195 1198371 request.go:683] "Waited before sending request" delay="196.324391ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m02"
	I1217 01:15:47.317689 1198371 pod_ready.go:94] pod "kube-scheduler-ha-202151-m02" is "Ready"
	I1217 01:15:47.317716 1198371 pod_ready.go:86] duration metric: took 399.77623ms for pod "kube-scheduler-ha-202151-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:47.317727 1198371 pod_ready.go:83] waiting for pod "kube-scheduler-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:47.514237 1198371 request.go:683] "Waited before sending request" delay="196.380439ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-202151-m03"
	I1217 01:15:47.714354 1198371 request.go:683] "Waited before sending request" delay="186.229927ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-202151-m03"
	I1217 01:15:47.717612 1198371 pod_ready.go:94] pod "kube-scheduler-ha-202151-m03" is "Ready"
	I1217 01:15:47.717640 1198371 pod_ready.go:86] duration metric: took 399.905925ms for pod "kube-scheduler-ha-202151-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:15:47.717654 1198371 pod_ready.go:40] duration metric: took 6.004835921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:15:47.772131 1198371 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 01:15:47.775434 1198371 out.go:179] * Done! kubectl is now configured to use "ha-202151" cluster and "default" namespace by default
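A minimal sketch of the kind of readiness polling the pod_ready lines above record, written with client-go. This is not minikube's pod_ready implementation; the namespace and pod name are copied from the log, and the kubeconfig location is an assumption.

// Minimal sketch (assumption: kubeconfig at the default location); polls one pod
// until its Ready condition is True, much like the waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(200 * time.Millisecond): // short delay between GETs, as in the throttled requests above
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, client, "kube-system", "etcd-ha-202151"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}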
	
	
	==> CRI-O <==
	Dec 17 01:13:46 ha-202151 crio[839]: time="2025-12-17T01:13:46.594272947Z" level=info msg="Starting container: 8002ac89817f6e75a2934336c7f0c8bc02421d99c1eea48665fe3b49483a172e" id=d91a2c98-dfff-4255-9857-4c5cba93f3f3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:13:46 ha-202151 crio[839]: time="2025-12-17T01:13:46.596308099Z" level=info msg="Started container" PID=1858 containerID=8002ac89817f6e75a2934336c7f0c8bc02421d99c1eea48665fe3b49483a172e description=kube-system/coredns-66bc5c9577-4s6qf/coredns id=d91a2c98-dfff-4255-9857-4c5cba93f3f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa00cae32b1368f4ee71ce36efdcbcfd6b9ed0e8313d99fdf304dc100fb5a027
	Dec 17 01:13:46 ha-202151 crio[839]: time="2025-12-17T01:13:46.608121663Z" level=info msg="Started container" PID=1846 containerID=c4b501012bc7ddcd682df1580de5e1f19a7d3ab86e0cf4a2a91ae7b31d94bd86 description=kube-system/coredns-66bc5c9577-km6lq/coredns id=b363ccbf-193c-4a05-a8c3-fbd6c2d44bb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ec7366429343c42492f608e151af8b1672a3dae18fda4fd1cd380f890839fbb
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.507205762Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-hw4rm/POD" id=ae452ee9-d9ae-4f7f-817f-ff6abf4d77da name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.507326317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.542666549Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-hw4rm Namespace:default ID:8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532 UID:d3764386-d2d6-4c64-89ee-996e807ed605 NetNS:/var/run/netns/3df853e4-4a8a-4d44-a980-7a5d67155ba0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079b50}] Aliases:map[]}"
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.542811776Z" level=info msg="Adding pod default_busybox-7b57f96db7-hw4rm to CNI network \"kindnet\" (type=ptp)"
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.571801107Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-hw4rm Namespace:default ID:8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532 UID:d3764386-d2d6-4c64-89ee-996e807ed605 NetNS:/var/run/netns/3df853e4-4a8a-4d44-a980-7a5d67155ba0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079b50}] Aliases:map[]}"
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.571984101Z" level=info msg="Checking pod default_busybox-7b57f96db7-hw4rm for CNI network kindnet (type=ptp)"
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.591793519Z" level=info msg="Ran pod sandbox 8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532 with infra container: default/busybox-7b57f96db7-hw4rm/POD" id=ae452ee9-d9ae-4f7f-817f-ff6abf4d77da name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.600331398Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1440cd6a-9947-4b36-a515-7e022ffe22e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.600514244Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1440cd6a-9947-4b36-a515-7e022ffe22e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.600562448Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28 found" id=1440cd6a-9947-4b36-a515-7e022ffe22e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.611735217Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e075699d-fd1a-42d8-ba65-e6ef53de4d6b name=/runtime.v1.ImageService/PullImage
	Dec 17 01:15:49 ha-202151 crio[839]: time="2025-12-17T01:15:49.624029105Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.801224724Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=e075699d-fd1a-42d8-ba65-e6ef53de4d6b name=/runtime.v1.ImageService/PullImage
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.802301922Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=7c6d0333-d2c2-4cc6-b612-ec7aa872049c name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.804204796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=19a1ae1c-e916-494d-a852-7c1f9591b373 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.81247855Z" level=info msg="Creating container: default/busybox-7b57f96db7-hw4rm/busybox" id=74484226-25f0-4ae1-9030-da646a803999 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.812898477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.829709339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.830891157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.865236553Z" level=info msg="Created container dc9e40097b78d73824ea8d9218bec9bd4afbf97019d0271f52f9484514929e5a: default/busybox-7b57f96db7-hw4rm/busybox" id=74484226-25f0-4ae1-9030-da646a803999 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.867677201Z" level=info msg="Starting container: dc9e40097b78d73824ea8d9218bec9bd4afbf97019d0271f52f9484514929e5a" id=06242a73-1251-49c9-b426-67cc2c70c691 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:15:51 ha-202151 crio[839]: time="2025-12-17T01:15:51.874182363Z" level=info msg="Started container" PID=2013 containerID=dc9e40097b78d73824ea8d9218bec9bd4afbf97019d0271f52f9484514929e5a description=default/busybox-7b57f96db7-hw4rm/busybox id=06242a73-1251-49c9-b426-67cc2c70c691 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	dc9e40097b78d       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   9 minutes ago       Running             busybox                   0                   8e9ad7ffec46a       busybox-7b57f96db7-hw4rm            default
	8002ac89817f6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 minutes ago      Running             coredns                   0                   fa00cae32b136       coredns-66bc5c9577-4s6qf            kube-system
	c4b501012bc7d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 minutes ago      Running             coredns                   0                   2ec7366429343       coredns-66bc5c9577-km6lq            kube-system
	efad5cac39643       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      11 minutes ago      Running             storage-provisioner       0                   f3eeb3fb3cc20       storage-provisioner                 kube-system
	290c4b50b64c8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      12 minutes ago      Running             kindnet-cni               0                   61f80fdec2bab       kindnet-7b5wx                       kube-system
	5e6cf9714dc97       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      12 minutes ago      Running             kube-proxy                0                   90549251203c6       kube-proxy-5gdc5                    kube-system
	6c6e08e158c00       ghcr.io/kube-vip/kube-vip@sha256:74581ff5ab80d8bd25e525d4066eb06614fd65c953d7a38e710a59d42399d439     12 minutes ago      Running             kube-vip                  0                   4e8d8b98d1ca9       kube-vip-ha-202151                  kube-system
	f6a6811bfb229       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      12 minutes ago      Running             kube-controller-manager   0                   5dad2a8035dd3       kube-controller-manager-ha-202151   kube-system
	69657d7ff56d6       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      12 minutes ago      Running             etcd                      0                   eaa18240e0073       etcd-ha-202151                      kube-system
	6e359320adae3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      12 minutes ago      Running             kube-scheduler            0                   544dd7eb61b7d       kube-scheduler-ha-202151            kube-system
	3b62247239d54       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      12 minutes ago      Running             kube-apiserver            0                   4073ea64c0b61       kube-apiserver-ha-202151            kube-system
	
	
	==> coredns [8002ac89817f6e75a2934336c7f0c8bc02421d99c1eea48665fe3b49483a172e] <==
	[INFO] 10.244.1.2:49274 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001605249s
	[INFO] 10.244.1.2:50026 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111381s
	[INFO] 10.244.1.2:44486 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101913s
	[INFO] 10.244.1.2:49588 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105687s
	[INFO] 10.244.2.2:58906 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077583s
	[INFO] 10.244.2.2:53381 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142068s
	[INFO] 10.244.2.2:43387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001031308s
	[INFO] 10.244.2.2:35622 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145129s
	[INFO] 10.244.0.4:42809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115213s
	[INFO] 10.244.0.4:41217 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015376s
	[INFO] 10.244.0.4:53891 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001099072s
	[INFO] 10.244.0.4:59643 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156238s
	[INFO] 10.244.1.2:40770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190034s
	[INFO] 10.244.2.2:46153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000409704s
	[INFO] 10.244.2.2:60926 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001201518s
	[INFO] 10.244.0.4:59731 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159151s
	[INFO] 10.244.0.4:51760 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088843s
	[INFO] 10.244.1.2:49841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151873s
	[INFO] 10.244.2.2:56801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130852s
	[INFO] 10.244.2.2:52928 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121432s
	[INFO] 10.244.2.2:46174 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133592s
	[INFO] 10.244.0.4:35651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100051s
	[INFO] 10.244.0.4:59993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050337s
	[INFO] 10.244.0.4:59805 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062677s
	[INFO] 10.244.0.4:37402 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083091s
	
	
	==> coredns [c4b501012bc7ddcd682df1580de5e1f19a7d3ab86e0cf4a2a91ae7b31d94bd86] <==
	[INFO] 127.0.0.1:58598 - 41575 "HINFO IN 2703696707173001861.1750452503372442765. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058553642s
	[INFO] 10.244.1.2:58398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197796s
	[INFO] 10.244.2.2:51145 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.001080587s
	[INFO] 10.244.2.2:59613 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.001325577s
	[INFO] 10.244.0.4:38714 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151307s
	[INFO] 10.244.0.4:33285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.0000857s
	[INFO] 10.244.2.2:56594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153218s
	[INFO] 10.244.2.2:47523 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144841s
	[INFO] 10.244.2.2:45768 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194628s
	[INFO] 10.244.2.2:46416 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173526s
	[INFO] 10.244.0.4:38855 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001107744s
	[INFO] 10.244.0.4:47614 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177768s
	[INFO] 10.244.0.4:54265 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120497s
	[INFO] 10.244.0.4:57946 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057508s
	[INFO] 10.244.1.2:57669 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197238s
	[INFO] 10.244.1.2:37380 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174042s
	[INFO] 10.244.1.2:44798 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091304s
	[INFO] 10.244.2.2:36488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000508245s
	[INFO] 10.244.2.2:49103 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000242898s
	[INFO] 10.244.0.4:60747 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088654s
	[INFO] 10.244.0.4:49094 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143454s
	[INFO] 10.244.1.2:46463 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118512s
	[INFO] 10.244.1.2:48296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128038s
	[INFO] 10.244.1.2:57242 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101159s
	[INFO] 10.244.2.2:57884 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120054s
	
	
	==> describe nodes <==
	Name:               ha-202151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:25:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:22:50 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:22:50 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:22:50 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:22:50 +0000   Wed, 17 Dec 2025 01:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-202151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                7edb1e1f-1b17-415f-9229-48ba3527eefe
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hw4rm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 coredns-66bc5c9577-4s6qf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-km6lq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-202151                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-7b5wx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-202151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-202151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5gdc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-202151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-202151                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-202151 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	
	
	Name:               ha-202151-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:13:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:17:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:16:02 +0000   Wed, 17 Dec 2025 01:18:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:16:02 +0000   Wed, 17 Dec 2025 01:18:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:16:02 +0000   Wed, 17 Dec 2025 01:18:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:16:02 +0000   Wed, 17 Dec 2025 01:18:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-202151-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                04eb29d0-5ea5-46d1-ae46-afe3ee374602
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-d62f7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 etcd-ha-202151-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-nt6qx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-202151-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-202151-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hp525                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-202151-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-202151-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal  RegisteredNode  10m    node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal  NodeNotReady    7m11s  node-controller  Node ha-202151-m02 status is now: NodeNotReady
	
	
	Name:               ha-202151-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_14_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:14:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:25:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:24:58 +0000   Wed, 17 Dec 2025 01:14:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:24:58 +0000   Wed, 17 Dec 2025 01:14:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:24:58 +0000   Wed, 17 Dec 2025 01:14:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:24:58 +0000   Wed, 17 Dec 2025 01:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-202151-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                99616620-1dc9-48f4-ad6d-0e01cae8525e
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fcp4p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 etcd-ha-202151-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-97bs4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-202151-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-202151-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gghqw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-202151-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-202151-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        10m   kube-proxy       
	  Normal  RegisteredNode  10m   node-controller  Node ha-202151-m03 event: Registered Node ha-202151-m03 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node ha-202151-m03 event: Registered Node ha-202151-m03 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node ha-202151-m03 event: Registered Node ha-202151-m03 in Controller
	
	
	Name:               ha-202151-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:16:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:25:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:25:14 +0000   Wed, 17 Dec 2025 01:16:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:25:14 +0000   Wed, 17 Dec 2025 01:16:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:25:14 +0000   Wed, 17 Dec 2025 01:16:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:25:14 +0000   Wed, 17 Dec 2025 01:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-202151-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                84c842f9-c3a2-4245-b176-e32c4cbe3e2c
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2d7p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kindnet-cntp7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m3s
	  kube-system                 kube-proxy-kqgdw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m                   kube-proxy       
	  Normal  RegisteredNode           9m3s                 node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal  CIDRAssignmentFailed     9m3s                 cidrAllocator    Node ha-202151-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  9m3s (x3 over 9m4s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m3s (x3 over 9m4s)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m3s (x3 over 9m4s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m1s                 node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal  RegisteredNode           9m                   node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal  NodeReady                8m21s                kubelet          Node ha-202151-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +41.693215] overlayfs: idmapped layers are currently not supported
	[Dec16 23:55] overlayfs: idmapped layers are currently not supported
	[Dec16 23:56] overlayfs: idmapped layers are currently not supported
	[  +2.818318] overlayfs: idmapped layers are currently not supported
	[Dec16 23:58] overlayfs: idmapped layers are currently not supported
	[  +5.205427] overlayfs: idmapped layers are currently not supported
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	[Dec17 01:12] overlayfs: idmapped layers are currently not supported
	[Dec17 01:13] overlayfs: idmapped layers are currently not supported
	[Dec17 01:14] overlayfs: idmapped layers are currently not supported
	[Dec17 01:16] overlayfs: idmapped layers are currently not supported
	[Dec17 01:17] overlayfs: idmapped layers are currently not supported
	[Dec17 01:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [69657d7ff56d63b015d010a46ad51e1a9b51ec1028b1ddb489530e4a3e11557a] <==
	{"level":"warn","ts":"2025-12-17T01:24:46.256741Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:49.144079Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:49.144136Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:51.256937Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"424f2d1540744ac2","rtt":"18.613748ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:51.256986Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:53.145974Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:53.146030Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:56.257137Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:56.257149Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"424f2d1540744ac2","rtt":"18.613748ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:57.147736Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:24:57.147792Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:01.149121Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:01.149177Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:01.257231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:01.257300Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"424f2d1540744ac2","rtt":"18.613748ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:05.150781Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:05.150979Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:06.257800Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:06.257994Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"424f2d1540744ac2","rtt":"18.613748ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:09.152808Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:09.152869Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:11.258372Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"424f2d1540744ac2","rtt":"1.560785ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:11.258402Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"424f2d1540744ac2","rtt":"18.613748ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:13.154417Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-17T01:25:13.154474Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"424f2d1540744ac2","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 01:25:15 up  7:07,  0 user,  load average: 0.35, 1.08, 1.21
	Linux ha-202151 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [290c4b50b64c8fd55c23d0c75fcb8736109775164f274f320c97deaed4932e9a] <==
	I1217 01:24:35.823614       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:24:45.828498       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:24:45.828537       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:24:45.828710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:24:45.828726       1 main.go:301] handling current node
	I1217 01:24:45.828741       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:24:45.828745       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:24:45.828802       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 01:24:45.828813       1 main.go:324] Node ha-202151-m03 has CIDR [10.244.2.0/24] 
	I1217 01:24:55.829130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:24:55.829162       1 main.go:301] handling current node
	I1217 01:24:55.829178       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:24:55.829184       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:24:55.829332       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 01:24:55.829343       1 main.go:324] Node ha-202151-m03 has CIDR [10.244.2.0/24] 
	I1217 01:24:55.829398       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:24:55.829411       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:25:05.822980       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:25:05.823113       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:25:05.823338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:25:05.823382       1 main.go:301] handling current node
	I1217 01:25:05.823420       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:25:05.823454       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:25:05.823561       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1217 01:25:05.823600       1 main.go:324] Node ha-202151-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3b62247239d54a0df909adfa213036293649bc4e77fc2ca825b350ee0dca8e46] <==
	I1217 01:12:57.996207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 01:12:58.127779       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 01:12:58.135821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1217 01:12:58.137152       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 01:12:58.142459       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 01:12:58.880950       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 01:12:59.089221       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 01:12:59.112454       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 01:12:59.127585       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:13:04.680228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 01:13:04.781304       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 01:13:04.789868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 01:13:04.985674       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 01:15:53.838282       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39300: use of closed network connection
	E1217 01:15:54.059123       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39324: use of closed network connection
	E1217 01:15:54.489127       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39358: use of closed network connection
	E1217 01:15:54.704328       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39388: use of closed network connection
	E1217 01:15:54.923486       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39408: use of closed network connection
	E1217 01:15:55.441119       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39432: use of closed network connection
	E1217 01:15:56.093135       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39488: use of closed network connection
	E1217 01:15:56.317067       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39510: use of closed network connection
	E1217 01:15:56.528817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39522: use of closed network connection
	E1217 01:15:56.738111       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39540: use of closed network connection
	E1217 01:15:56.950116       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:39556: use of closed network connection
	I1217 01:22:55.791853       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [f6a6811bfb229c5d062a8e7b033ed3de2dbb0e2ed165e1437a12f07a4faab3c2] <==
	I1217 01:13:03.977887       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 01:13:03.979032       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 01:13:03.982509       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 01:13:03.983536       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 01:13:03.985640       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 01:13:39.627062       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-202151-m02\" does not exist"
	I1217 01:13:39.681720       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-202151-m02" podCIDRs=["10.244.1.0/24"]
	I1217 01:13:43.977402       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-202151-m02"
	I1217 01:13:48.978559       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1217 01:14:55.982027       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2cxmm failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2cxmm\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1217 01:14:56.669767       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-202151-m03\" does not exist"
	I1217 01:14:56.717134       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-202151-m03" podCIDRs=["10.244.2.0/24"]
	I1217 01:14:59.048962       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-202151-m03"
	E1217 01:15:52.751252       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1217 01:16:11.762940       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-dtnw5 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dtnw5\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1217 01:16:11.834242       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-dtnw5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-dtnw5\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1217 01:16:12.100211       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-202151-m04\" does not exist"
	I1217 01:16:12.216544       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-202151-m04" podCIDRs=["10.244.3.0/24"]
	E1217 01:16:12.603101       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"e5de0cee-d08d-4440-b9e5-05c31e7c289c\", ResourceVersion:\"915\", Generation:1, CreationTimestamp:time.Date(2025, time.December, 17, 1, 12, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\
",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001ce5b80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\
"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001da3710), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolume
ClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001da3728), EmptyDir:(*v1.EmptyDirVolumeSource)
(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portworx
VolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001da3740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Az
ureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x4002890570)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarS
ource)(0x40028905a0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.VolumeM
ount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x4002756240), Stdin:false, StdinOnce:false,
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x40026ff158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4002787830), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(ni
l), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40028ecb90)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40026ff1a0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandled
Error"
	I1217 01:16:14.114426       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-202151-m04"
	I1217 01:16:54.768033       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	I1217 01:18:04.154076       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	I1217 01:23:04.257951       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-d62f7"
	E1217 01:23:04.446861       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [5e6cf9714dc9704e6cc5b861ed17bc38ac94c7a3dea81a72445f7571f0a8eda4] <==
	I1217 01:13:05.612508       1 server_linux.go:53] "Using iptables proxy"
	I1217 01:13:05.690109       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 01:13:05.791104       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:13:05.791141       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 01:13:05.791220       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:13:05.830457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 01:13:05.830515       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:13:05.836894       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:13:05.837244       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:13:05.837267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:13:05.838652       1 config.go:200] "Starting service config controller"
	I1217 01:13:05.838672       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:13:05.838688       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:13:05.838692       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:13:05.838703       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:13:05.838706       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:13:05.843164       1 config.go:309] "Starting node config controller"
	I1217 01:13:05.843188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:13:05.843196       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:13:05.940270       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 01:13:05.940332       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:13:05.940384       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6e359320adae350b708f5960266c5ea744593223bde2ac179d7068e12682fe65] <==
	E1217 01:15:49.222170       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2d7eee3c-7623-479e-9f77-4c536aa335bc(default/busybox-7b57f96db7-d62f7) was assumed on ha-202151-m03 but assigned to ha-202151-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-d62f7"
	E1217 01:15:49.222202       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-d62f7\": pod busybox-7b57f96db7-d62f7 is already assigned to node \"ha-202151-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-d62f7"
	I1217 01:15:49.223896       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-d62f7" node="ha-202151-m02"
	E1217 01:16:12.229387       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kqgdw\": pod kube-proxy-kqgdw is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kqgdw" node="ha-202151-m04"
	E1217 01:16:12.229448       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 56992588-713c-480e-8431-1d741ae5feeb(kube-system/kube-proxy-kqgdw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-kqgdw"
	E1217 01:16:12.229470       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kqgdw\": pod kube-proxy-kqgdw is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-kqgdw"
	I1217 01:16:12.232771       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kqgdw" node="ha-202151-m04"
	E1217 01:16:12.273945       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cntp7\": pod kindnet-cntp7 is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cntp7" node="ha-202151-m04"
	E1217 01:16:12.274005       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 38bf36f5-abfa-4bc1-b2e4-7b90498614e4(kube-system/kindnet-cntp7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cntp7"
	E1217 01:16:12.274026       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cntp7\": pod kindnet-cntp7 is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kindnet-cntp7"
	I1217 01:16:12.275639       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cntp7" node="ha-202151-m04"
	E1217 01:16:12.412558       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xpvbn\": pod kube-proxy-xpvbn is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xpvbn" node="ha-202151-m04"
	E1217 01:16:12.412630       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xpvbn\": pod kube-proxy-xpvbn is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-xpvbn"
	E1217 01:16:12.445704       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vmzcl\": pod kindnet-vmzcl is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vmzcl" node="ha-202151-m04"
	E1217 01:16:12.445763       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9adf80ad-e3a9-464a-b7a1-4a2cee0a9ce0(kube-system/kindnet-vmzcl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-vmzcl"
	E1217 01:16:12.445782       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vmzcl\": pod kindnet-vmzcl is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kindnet-vmzcl"
	E1217 01:16:12.445824       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kssg4\": pod kube-proxy-kssg4 is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kssg4" node="ha-202151-m04"
	E1217 01:16:12.445842       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 40bc0d25-bc99-4c0f-90cc-f8e9411a54a5(kube-system/kube-proxy-kssg4) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-kssg4"
	E1217 01:16:12.448589       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kssg4\": pod kube-proxy-kssg4 is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-kssg4"
	I1217 01:16:12.448641       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kssg4" node="ha-202151-m04"
	I1217 01:16:12.449006       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vmzcl" node="ha-202151-m04"
	E1217 01:16:12.477997       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6vsxj\": pod kindnet-6vsxj is already assigned to node \"ha-202151-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6vsxj" node="ha-202151-m04"
	E1217 01:16:12.478064       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4e14850c-ada5-45ef-9688-9a4e5a68a6e4(kube-system/kindnet-6vsxj) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-6vsxj"
	E1217 01:16:12.478087       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6vsxj\": pod kindnet-6vsxj is already assigned to node \"ha-202151-m04\"" logger="UnhandledError" pod="kube-system/kindnet-6vsxj"
	I1217 01:16:12.479965       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6vsxj" node="ha-202151-m04"
	
	
	==> kubelet <==
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136526    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77v7c\" (UniqueName: \"kubernetes.io/projected/5189a0d1-4ee1-4205-99ff-4fa3ce427bbf-kube-api-access-77v7c\") pod \"kube-proxy-5gdc5\" (UID: \"5189a0d1-4ee1-4205-99ff-4fa3ce427bbf\") " pod="kube-system/kube-proxy-5gdc5"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136746    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5189a0d1-4ee1-4205-99ff-4fa3ce427bbf-lib-modules\") pod \"kube-proxy-5gdc5\" (UID: \"5189a0d1-4ee1-4205-99ff-4fa3ce427bbf\") " pod="kube-system/kube-proxy-5gdc5"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136820    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b89af87c-c3a2-4b6a-9ea7-93332e886e9c-xtables-lock\") pod \"kindnet-7b5wx\" (UID: \"b89af87c-c3a2-4b6a-9ea7-93332e886e9c\") " pod="kube-system/kindnet-7b5wx"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136842    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5wp\" (UniqueName: \"kubernetes.io/projected/b89af87c-c3a2-4b6a-9ea7-93332e886e9c-kube-api-access-rj5wp\") pod \"kindnet-7b5wx\" (UID: \"b89af87c-c3a2-4b6a-9ea7-93332e886e9c\") " pod="kube-system/kindnet-7b5wx"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136905    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5189a0d1-4ee1-4205-99ff-4fa3ce427bbf-kube-proxy\") pod \"kube-proxy-5gdc5\" (UID: \"5189a0d1-4ee1-4205-99ff-4fa3ce427bbf\") " pod="kube-system/kube-proxy-5gdc5"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136932    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b89af87c-c3a2-4b6a-9ea7-93332e886e9c-cni-cfg\") pod \"kindnet-7b5wx\" (UID: \"b89af87c-c3a2-4b6a-9ea7-93332e886e9c\") " pod="kube-system/kindnet-7b5wx"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.136951    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b89af87c-c3a2-4b6a-9ea7-93332e886e9c-lib-modules\") pod \"kindnet-7b5wx\" (UID: \"b89af87c-c3a2-4b6a-9ea7-93332e886e9c\") " pod="kube-system/kindnet-7b5wx"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: I1217 01:13:05.275838    1361 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 17 01:13:05 ha-202151 kubelet[1361]: W1217 01:13:05.383202    1361 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio-90549251203c6bde30e61edc5a4dd27709a11aae8bc9f724e94a7e00497603af WatchSource:0}: Error finding container 90549251203c6bde30e61edc5a4dd27709a11aae8bc9f724e94a7e00497603af: Status 404 returned error can't find the container with id 90549251203c6bde30e61edc5a4dd27709a11aae8bc9f724e94a7e00497603af
	Dec 17 01:13:06 ha-202151 kubelet[1361]: I1217 01:13:06.317159    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5gdc5" podStartSLOduration=2.317133865 podStartE2EDuration="2.317133865s" podCreationTimestamp="2025-12-17 01:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 01:13:06.241057244 +0000 UTC m=+7.327614078" watchObservedRunningTime="2025-12-17 01:13:06.317133865 +0000 UTC m=+7.403690691"
	Dec 17 01:13:06 ha-202151 kubelet[1361]: I1217 01:13:06.354896    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7b5wx" podStartSLOduration=1.35487332 podStartE2EDuration="1.35487332s" podCreationTimestamp="2025-12-17 01:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 01:13:06.31820554 +0000 UTC m=+7.404762382" watchObservedRunningTime="2025-12-17 01:13:06.35487332 +0000 UTC m=+7.441430146"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.078045    1361 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263278    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pdds\" (UniqueName: \"kubernetes.io/projected/db1e59c0-7387-4c55-b417-dd3dd6c4a2e0-kube-api-access-8pdds\") pod \"storage-provisioner\" (UID: \"db1e59c0-7387-4c55-b417-dd3dd6c4a2e0\") " pod="kube-system/storage-provisioner"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263351    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2cl\" (UniqueName: \"kubernetes.io/projected/d08a25a9-a22d-4a68-acc7-99caa664092b-kube-api-access-pb2cl\") pod \"coredns-66bc5c9577-4s6qf\" (UID: \"d08a25a9-a22d-4a68-acc7-99caa664092b\") " pod="kube-system/coredns-66bc5c9577-4s6qf"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263380    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/db1e59c0-7387-4c55-b417-dd3dd6c4a2e0-tmp\") pod \"storage-provisioner\" (UID: \"db1e59c0-7387-4c55-b417-dd3dd6c4a2e0\") " pod="kube-system/storage-provisioner"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263407    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf3a6983-bb0f-4b64-8193-77fde64b77f6-config-volume\") pod \"coredns-66bc5c9577-km6lq\" (UID: \"bf3a6983-bb0f-4b64-8193-77fde64b77f6\") " pod="kube-system/coredns-66bc5c9577-km6lq"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263432    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvlrm\" (UniqueName: \"kubernetes.io/projected/bf3a6983-bb0f-4b64-8193-77fde64b77f6-kube-api-access-cvlrm\") pod \"coredns-66bc5c9577-km6lq\" (UID: \"bf3a6983-bb0f-4b64-8193-77fde64b77f6\") " pod="kube-system/coredns-66bc5c9577-km6lq"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: I1217 01:13:46.263458    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d08a25a9-a22d-4a68-acc7-99caa664092b-config-volume\") pod \"coredns-66bc5c9577-4s6qf\" (UID: \"d08a25a9-a22d-4a68-acc7-99caa664092b\") " pod="kube-system/coredns-66bc5c9577-4s6qf"
	Dec 17 01:13:46 ha-202151 kubelet[1361]: W1217 01:13:46.475909    1361 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio-f3eeb3fb3cc203930fcb844022de3f64733d5ffe59e491610d4b34770cc3d071 WatchSource:0}: Error finding container f3eeb3fb3cc203930fcb844022de3f64733d5ffe59e491610d4b34770cc3d071: Status 404 returned error can't find the container with id f3eeb3fb3cc203930fcb844022de3f64733d5ffe59e491610d4b34770cc3d071
	Dec 17 01:13:46 ha-202151 kubelet[1361]: W1217 01:13:46.523859    1361 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio-fa00cae32b1368f4ee71ce36efdcbcfd6b9ed0e8313d99fdf304dc100fb5a027 WatchSource:0}: Error finding container fa00cae32b1368f4ee71ce36efdcbcfd6b9ed0e8313d99fdf304dc100fb5a027: Status 404 returned error can't find the container with id fa00cae32b1368f4ee71ce36efdcbcfd6b9ed0e8313d99fdf304dc100fb5a027
	Dec 17 01:13:47 ha-202151 kubelet[1361]: I1217 01:13:47.330741    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4s6qf" podStartSLOduration=42.330724515 podStartE2EDuration="42.330724515s" podCreationTimestamp="2025-12-17 01:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 01:13:47.330089267 +0000 UTC m=+48.416646101" watchObservedRunningTime="2025-12-17 01:13:47.330724515 +0000 UTC m=+48.417281340"
	Dec 17 01:13:47 ha-202151 kubelet[1361]: I1217 01:13:47.428729    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-km6lq" podStartSLOduration=42.428709523 podStartE2EDuration="42.428709523s" podCreationTimestamp="2025-12-17 01:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 01:13:47.377458119 +0000 UTC m=+48.464014953" watchObservedRunningTime="2025-12-17 01:13:47.428709523 +0000 UTC m=+48.515266348"
	Dec 17 01:13:47 ha-202151 kubelet[1361]: I1217 01:13:47.471166    1361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.471145614 podStartE2EDuration="42.471145614s" podCreationTimestamp="2025-12-17 01:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 01:13:47.431520899 +0000 UTC m=+48.518077733" watchObservedRunningTime="2025-12-17 01:13:47.471145614 +0000 UTC m=+48.557702448"
	Dec 17 01:15:49 ha-202151 kubelet[1361]: I1217 01:15:49.271249    1361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jthw\" (UniqueName: \"kubernetes.io/projected/d3764386-d2d6-4c64-89ee-996e807ed605-kube-api-access-2jthw\") pod \"busybox-7b57f96db7-hw4rm\" (UID: \"d3764386-d2d6-4c64-89ee-996e807ed605\") " pod="default/busybox-7b57f96db7-hw4rm"
	Dec 17 01:15:49 ha-202151 kubelet[1361]: W1217 01:15:49.590139    1361 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio-8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532 WatchSource:0}: Error finding container 8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532: Status 404 returned error can't find the container with id 8e9ad7ffec46afb91809583d6e8ed02b1b1247307171b28fbc4156a48b021532
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-202151 -n ha-202151
helpers_test.go:270: (dbg) Run:  kubectl --context ha-202151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (464.64s)

x
+
TestMultiControlPlane/serial/RestartCluster (477.07s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 01:29:30.547880 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:07.912234 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:31:45.353956 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:07.479733 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:34:50.984119 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:07.911638 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 105 (7m51.391484574s)

-- stdout --
	* [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1217 01:28:23.957919 1225677 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.958241 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958276 1225677 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.958300 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958577 1225677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.958999 1225677 out.go:368] Setting JSON to false
	I1217 01:28:23.959883 1225677 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25854,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:28:23.959981 1225677 start.go:143] virtualization:  
	I1217 01:28:23.963109 1225677 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:28:23.966861 1225677 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:23.967008 1225677 notify.go:221] Checking for updates...
	I1217 01:28:23.972825 1225677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:23.975704 1225677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:23.978560 1225677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:28:23.981565 1225677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:28:23.984558 1225677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:23.987973 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.988577 1225677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:24.018679 1225677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:28:24.018817 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.078613 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.06901697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.078731 1225677 docker.go:319] overlay module found
	I1217 01:28:24.081724 1225677 out.go:179] * Using the docker driver based on existing profile
	I1217 01:28:24.084659 1225677 start.go:309] selected driver: docker
	I1217 01:28:24.084679 1225677 start.go:927] validating driver "docker" against &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.084825 1225677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:24.084933 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.139102 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.130176461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.139528 1225677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:28:24.139560 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:24.139616 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:24.139662 1225677 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.142829 1225677 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:28:24.145513 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:24.148343 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:24.151136 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:24.151182 1225677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:28:24.151172 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:24.151191 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:24.151281 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:24.151292 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:24.151447 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.170893 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:24.170917 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:24.170932 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:24.170962 1225677 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:24.171020 1225677 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-202151"
	I1217 01:28:24.171043 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:24.171052 1225677 fix.go:54] fixHost starting: 
	I1217 01:28:24.171312 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.188404 1225677 fix.go:112] recreateIfNeeded on ha-202151: state=Stopped err=<nil>
	W1217 01:28:24.188458 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:24.191811 1225677 out.go:252] * Restarting existing docker container for "ha-202151" ...
	I1217 01:28:24.191909 1225677 cli_runner.go:164] Run: docker start ha-202151
	I1217 01:28:24.438707 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.459881 1225677 kic.go:430] container "ha-202151" state is running.
	I1217 01:28:24.460741 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:24.487033 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.487599 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:24.487676 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:24.511372 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:24.513726 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:24.513748 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:24.516008 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:28:27.648958 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.648981 1225677 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:28:27.649043 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.671053 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.671376 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.671387 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:28:27.816001 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.816128 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.833557 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.833865 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.833885 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:27.968607 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
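The exchange above is libmachine dialing the container's published SSH port (127.0.0.1:33958) and running the hostname provisioning commands as the docker user. Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh, with the port, user and key path taken from the log purely for illustration; it is not minikube's own implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs one command over SSH, roughly what each
// "About to run SSH command" line in the log corresponds to.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Values taken from the log lines above: port 33958, user "docker", the profile's id_rsa.
	out, err := runRemote("127.0.0.1:33958", "docker",
		"/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa",
		"hostname")
	fmt.Println(out, err)
}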
	I1217 01:28:27.968638 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:27.968669 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:27.968686 1225677 provision.go:84] configureAuth start
	I1217 01:28:27.968751 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:27.986183 1225677 provision.go:143] copyHostCerts
	I1217 01:28:27.986244 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986288 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:27.986301 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986379 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:27.986471 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986493 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:27.986502 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986530 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:27.986576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986601 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:27.986609 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986637 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:27.986687 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
	I1217 01:28:28.161966 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:28.162074 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:28.162136 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.180162 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.276314 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:28.276374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:28.294399 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:28.294463 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:28:28.312546 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:28.312611 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:28.329872 1225677 provision.go:87] duration metric: took 361.168151ms to configureAuth
	I1217 01:28:28.329900 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:28.330141 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:28.330260 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.347687 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:28.348017 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:28.348037 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:28.719002 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:28.719025 1225677 machine.go:97] duration metric: took 4.231409969s to provisionDockerMachine
	I1217 01:28:28.719036 1225677 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:28:28.719047 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:28.719106 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:28.719158 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.741197 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.836254 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:28.839569 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:28.839599 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:28.839611 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:28.839667 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:28.839747 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:28.839758 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:28.839856 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:28.847310 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:28.864518 1225677 start.go:296] duration metric: took 145.466453ms for postStartSetup
	I1217 01:28:28.864667 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:28.864709 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.882572 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.974073 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:28.979262 1225677 fix.go:56] duration metric: took 4.808204011s for fixHost
	I1217 01:28:28.979289 1225677 start.go:83] releasing machines lock for "ha-202151", held for 4.808256014s
	I1217 01:28:28.979366 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:29.000545 1225677 ssh_runner.go:195] Run: cat /version.json
	I1217 01:28:29.000593 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:29.000605 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.000678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.017863 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.030045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.205586 1225677 ssh_runner.go:195] Run: systemctl --version
	I1217 01:28:29.212211 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:29.247878 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:29.252247 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:29.252372 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:29.260987 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
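Before reconfiguring the runtime, minikube checks /etc/cni/net.d for loopback and bridge/podman CNI configs and would rename any match to *.mk_disabled; here the find matches nothing, so there is nothing to disable. A small Go sketch of that rename pass under the same directory and suffix conventions (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs mirrors the logged find/mv: rename any bridge or podman
// CNI config in dir to <name>.mk_disabled so the kindnet CNI can take over.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(moved, err)
}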
	I1217 01:28:29.261012 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:29.261044 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:29.261091 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:29.276500 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:29.289977 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:29.290113 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:29.306150 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:29.319359 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:29.442260 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:29.554130 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:29.554229 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:29.569409 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:29.582225 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:29.693269 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:29.815821 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:29.829762 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:29.843587 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:29.843675 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.852929 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:29.853026 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.862094 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.870988 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.879860 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:29.888714 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.897427 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.906242 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.915392 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:29.923247 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:29.930867 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.085763 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
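The sed invocations above point cri-o at the registry.k8s.io/pause:3.10.1 pause image, force the cgroupfs cgroup manager, pin conmon_cgroup to "pod" and open unprivileged ports via default_sysctls, then reload systemd and restart crio. A short Go sketch of the same line-level rewrite with regexp, assuming the config path shown in the log (illustrative, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}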
	I1217 01:28:30.268466 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:28:30.268540 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:28:30.272645 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:28:30.272717 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:28:30.276359 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:28:30.302094 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:28:30.302194 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.329875 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.364988 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:28:30.367851 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:28:30.383155 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:28:30.387105 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
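The one-liner above makes the host.minikube.internal entry idempotent: it drops any existing line for that name from /etc/hosts, appends the fresh 192.168.49.1 mapping to a temp file, and copies it back with sudo. A compact Go sketch of the same filter-and-append step (illustrative; unlike the logged command it writes the file directly, so it assumes the process already runs as root):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner in the log: remove any existing
// line ending in "\thost.minikube.internal" and append the current mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // same filter as: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}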
	I1217 01:28:30.397488 1225677 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:28:30.397642 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:30.397701 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.434465 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.434490 1225677 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:28:30.434546 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.461597 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.461622 1225677 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:28:30.461631 1225677 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:28:30.461733 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:28:30.461815 1225677 ssh_runner.go:195] Run: crio config
	I1217 01:28:30.524993 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:30.525016 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:30.525041 1225677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:28:30.525063 1225677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:28:30.525197 1225677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:28:30.525219 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:28:30.525269 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:28:30.537247 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:30.537359 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
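The static pod above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml (the 1358-byte scp a few lines below). Because `sudo sh -c "lsmod | grep ip_vs"` exited 1, control-plane load-balancing is skipped and only the ARP-based VIP (192.168.49.254 on eth0) is configured. A hedged sketch of an equivalent module probe in Go, reading /proc/modules directly instead of shelling out to lsmod (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasIPVS is a rough equivalent of the `lsmod | grep ip_vs` probe in the log;
// lsmod itself reads /proc/modules, so we scan that file directly.
func hasIPVS() bool {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	if !hasIPVS() {
		// Matches the behaviour logged above: fall back to the ARP-only VIP,
		// skipping control-plane load-balancing.
		fmt.Println("ip_vs not loaded: skipping control-plane load-balancing")
	}
}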
	I1217 01:28:30.537423 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:28:30.545256 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:28:30.545330 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:28:30.553189 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:28:30.566160 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:28:30.579061 1225677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:28:30.591667 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:28:30.604079 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:28:30.607859 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.617660 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.737827 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:28:30.755642 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:28:30.755663 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:28:30.755694 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:30.755839 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:28:30.755906 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:28:30.755919 1225677 certs.go:257] generating profile certs ...
	I1217 01:28:30.755998 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:28:30.756031 1225677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698
	I1217 01:28:30.756050 1225677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:28:31.070955 1225677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 ...
	I1217 01:28:31.071062 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698: {Name:mke1b333e19e123d757f2361ffab64b3ce630ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071323 1225677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 ...
	I1217 01:28:31.071369 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698: {Name:mk12d8ef8dbb1ef8ff84c5ba8c83b430a9515230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071553 1225677 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:28:31.071777 1225677 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:28:31.071982 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:28:31.072020 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:28:31.072053 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:28:31.072099 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:28:31.072142 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:28:31.072179 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:28:31.072222 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:28:31.072260 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:28:31.072291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:28:31.072379 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:28:31.072496 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:28:31.072540 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:28:31.072623 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:28:31.072699 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:28:31.072755 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:28:31.072888 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:31.072995 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.073038 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.073074 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.073717 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:28:31.098054 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:28:31.121354 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:28:31.140746 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:28:31.159713 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:28:31.178284 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:28:31.196338 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:28:31.214382 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:28:31.231910 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:28:31.249283 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:28:31.267150 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:28:31.284464 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:28:31.297370 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:28:31.303511 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.310796 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:28:31.318435 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322279 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322380 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.363578 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:28:31.371139 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.378596 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:28:31.385983 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389802 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389911 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.449546 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:28:31.463605 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.474127 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:28:31.484475 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489596 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489713 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.551435 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
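Each of the three certificate blocks above follows the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and confirm a /etc/ssl/certs/<hash>.0 symlink exists (3ec20f2e.0, b5213941.0 and 51391683.0 here). A sketch of that hash-and-link step in Go, shelling out to the same `openssl x509 -hash -noout` call; the certificate path is taken from the log and the whole snippet is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the logged sequence: hash the certificate with
// `openssl x509 -hash -noout -in <cert>` and create /etc/ssl/certs/<hash>.0.
func linkCACert(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// `ln -fs` semantics: drop any stale link before recreating it.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}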
	I1217 01:28:31.559450 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:28:31.573170 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:28:31.639157 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:28:31.715122 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:28:31.783477 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:28:31.844822 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:28:31.905215 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
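The six `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours before the existing configuration is reused. The same check can be expressed directly with crypto/x509; this is a hedged sketch of that idea, not how minikube performs it:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// i.e. the condition that makes `openssl x509 -checkend <seconds>` fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}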
	I1217 01:28:31.967945 1225677 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:31.968163 1225677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:28:31.968241 1225677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:28:32.018626 1225677 cri.go:89] found id: "9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c"
	I1217 01:28:32.018691 1225677 cri.go:89] found id: "b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:28:32.018711 1225677 cri.go:89] found id: "d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e"
	I1217 01:28:32.018735 1225677 cri.go:89] found id: "f70584959dd02aedc5247d28de369b3dfbec762797364a5b46746119bcd380ba"
	I1217 01:28:32.018753 1225677 cri.go:89] found id: "82cc4882889dc4d930d89f36ac77114d0161f4172216bc47431b8697c0630be5"
	I1217 01:28:32.018781 1225677 cri.go:89] found id: ""
	I1217 01:28:32.018853 1225677 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 01:28:32.044061 1225677 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1217 01:28:32.044185 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:28:32.052950 1225677 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:28:32.053010 1225677 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:28:32.053080 1225677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:28:32.061188 1225677 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:32.061654 1225677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-202151" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.061797 1225677 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-202151" cluster setting kubeconfig missing "ha-202151" context setting]
	I1217 01:28:32.062106 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.062698 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:28:32.063465 1225677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:28:32.063546 1225677 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:28:32.063583 1225677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:28:32.063613 1225677 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:28:32.063651 1225677 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:28:32.063976 1225677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:28:32.063525 1225677 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:28:32.081817 1225677 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 01:28:32.081837 1225677 kubeadm.go:602] duration metric: took 28.80443ms to restartPrimaryControlPlane
	I1217 01:28:32.081846 1225677 kubeadm.go:403] duration metric: took 113.913079ms to StartCluster
	I1217 01:28:32.081861 1225677 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.081919 1225677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.082486 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.082669 1225677 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:28:32.082688 1225677 start.go:242] waiting for startup goroutines ...
	I1217 01:28:32.082706 1225677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:28:32.083152 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.086942 1225677 out.go:179] * Enabled addons: 
	I1217 01:28:32.089944 1225677 addons.go:530] duration metric: took 7.236595ms for enable addons: enabled=[]
	I1217 01:28:32.089983 1225677 start.go:247] waiting for cluster config update ...
	I1217 01:28:32.089992 1225677 start.go:256] writing updated cluster config ...
	I1217 01:28:32.093327 1225677 out.go:203] 
	I1217 01:28:32.096604 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.096790 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.100238 1225677 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:28:32.103257 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:32.106243 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:32.109227 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:32.109291 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:32.109420 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:32.109454 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:32.109592 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.109854 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:32.139073 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:32.139092 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:32.139106 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:32.139130 1225677 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:32.139181 1225677 start.go:364] duration metric: took 36.692µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:28:32.139199 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:32.139204 1225677 fix.go:54] fixHost starting: m02
	I1217 01:28:32.139463 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.170663 1225677 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:28:32.170689 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:32.173829 1225677 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:28:32.173910 1225677 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:28:32.543486 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.572710 1225677 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:28:32.573066 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:32.602951 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.603208 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:32.603266 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:32.629641 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:32.629950 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:32.629959 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:32.630596 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37710->127.0.0.1:33963: read: connection reset by peer
	I1217 01:28:35.808896 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:35.808924 1225677 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:28:35.808996 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:35.842137 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:35.842447 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:35.842466 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:28:36.038050 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:36.038178 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.082250 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:36.082569 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:36.082593 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
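	A hypothetical follow-up (not part of the test) to confirm the hostname and the 127.0.1.1 mapping written by the script above:
	    # kernel hostname set by the earlier "sudo hostname" command
	    hostname
	    # /etc/hosts entry written (or rewritten) by the script above
	    grep -n 'ha-202151-m02' /etc/hosts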
	I1217 01:28:36.332805 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:36.332901 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:36.332944 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:36.332991 1225677 provision.go:84] configureAuth start
	I1217 01:28:36.333104 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:36.366101 1225677 provision.go:143] copyHostCerts
	I1217 01:28:36.366154 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366188 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:36.366198 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366291 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:36.366454 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366479 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:36.366484 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366514 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:36.366576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366600 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:36.366604 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366636 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:36.366685 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:28:36.714448 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:36.714609 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:36.714700 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.737234 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
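	The line above shows everything needed to reach the m02 node by hand; a hypothetical manual session using those same values (forwarded port 33963, user docker, the profile's id_rsa key), for illustration only:
	    ssh -i /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa \
	        -p 33963 -o StrictHostKeyChecking=no docker@127.0.0.1 hostname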
	I1217 01:28:36.864039 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:36.864124 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:36.913291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:36.913360 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:28:36.977060 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:36.977210 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:37.077043 1225677 provision.go:87] duration metric: took 744.017822ms to configureAuth
	I1217 01:28:37.077119 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:37.077458 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:37.077641 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:37.114203 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:37.114614 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:37.114630 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:38.749167 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:38.749190 1225677 machine.go:97] duration metric: took 6.145972988s to provisionDockerMachine
	I1217 01:28:38.749202 1225677 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:28:38.749218 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:38.749280 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:38.749320 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.798164 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:38.934750 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:38.938751 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:38.938784 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:38.938805 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:38.938890 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:38.939022 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:38.939035 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:38.939161 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:38.949374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:38.977662 1225677 start.go:296] duration metric: took 228.444359ms for postStartSetup
	I1217 01:28:38.977768 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:38.977833 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.997045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.094589 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:39.100157 1225677 fix.go:56] duration metric: took 6.9609442s for fixHost
	I1217 01:28:39.100185 1225677 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.960996095s
	I1217 01:28:39.100277 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:39.121509 1225677 out.go:179] * Found network options:
	I1217 01:28:39.124537 1225677 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:28:39.127500 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:28:39.127546 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:28:39.127633 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:39.127678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.127731 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:39.127813 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.159911 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.160356 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.389362 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:39.518196 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:39.518280 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:39.530690 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:39.530730 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:39.530766 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:39.530828 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:39.559452 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:39.590703 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:39.590778 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:39.623053 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:39.646277 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:39.924657 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:40.211696 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:40.211818 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:40.234789 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:40.255311 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:40.483522 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:40.697787 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:40.728627 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:40.773025 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:40.773101 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.810962 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:40.811053 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.830095 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.843899 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.859512 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:40.875469 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.891423 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.906705 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.920139 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:40.935324 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:40.949872 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:41.265195 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:11.765812 1225677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.500580562s)
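	The 90-second "systemctl restart crio" above dominates the time spent re-provisioning m02. A hypothetical way to confirm the preceding sed edits landed and to look at why the restart was slow (run inside the node; assumes standard systemd tooling in the kicbase image, not part of the test):
	    # values written by the sed commands above
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # cri-o's own startup log for this boot, plus current unit state
	    sudo journalctl -u crio -b --no-pager | tail -n 50
	    sudo systemctl status crio --no-pager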
	I1217 01:30:11.765836 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:11.765895 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:11.773685 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:30:11.773748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:30:11.777914 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:30:11.832219 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:30:11.832561 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.883307 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.931713 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:30:11.934749 1225677 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:30:11.937773 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:30:11.958180 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:11.963975 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:11.980941 1225677 mustload.go:66] Loading cluster: ha-202151
	I1217 01:30:11.981196 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:11.981523 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:30:12.010212 1225677 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:30:12.010538 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:30:12.010547 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:30:12.010562 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:12.010679 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:30:12.010721 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:30:12.010729 1225677 certs.go:257] generating profile certs ...
	I1217 01:30:12.010806 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:30:12.010871 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:30:12.010909 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:30:12.010918 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:30:12.010930 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:30:12.010942 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:30:12.010952 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:30:12.010963 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:30:12.010976 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:30:12.010988 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:30:12.010998 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:30:12.011046 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:30:12.011099 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:12.011108 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:30:12.011142 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:30:12.011167 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:12.011226 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:30:12.011276 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:30:12.011308 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.011330 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.011341 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.011405 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:30:12.040530 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:30:12.140835 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:30:12.145679 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:30:12.155103 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:30:12.158946 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:30:12.168468 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:30:12.172730 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:30:12.182622 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:30:12.186892 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:30:12.196428 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:30:12.200769 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:30:12.210174 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:30:12.214229 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:30:12.223408 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:12.242760 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:12.263233 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:12.281118 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:12.299303 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:30:12.317115 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:12.334779 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:12.352592 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:30:12.370481 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:30:12.389095 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:30:12.412594 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:12.449315 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:30:12.473400 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:30:12.494693 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:30:12.517806 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:30:12.543454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:30:12.563454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:30:12.583785 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:30:12.603782 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:30:12.611317 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.622461 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:30:12.631322 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635830 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635962 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.683099 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:12.692252 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.701723 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:12.714594 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719579 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719716 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.763558 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:12.772848 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.782803 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:30:12.792174 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.797950 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.798068 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.843461 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:12.852350 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:12.856738 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:12.902677 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:12.948658 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:12.994789 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:13.042684 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:13.096054 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:30:13.158401 1225677 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:30:13.158570 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:13.158615 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:30:13.158706 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:30:13.173582 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:30:13.173705 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
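	The manifest above is written as a static pod (it is scp'd to /etc/kubernetes/manifests/kube-vip.yaml a few lines below). A hypothetical check that kubelet picked it up and that the VIP from the manifest (192.168.49.254) routes to an apiserver; even a 401/403 from curl would prove the VIP is answering. Illustration only, not run by the test:
	    # the static pod should appear as a container named kube-vip on this node
	    sudo crictl ps -a --name kube-vip
	    # probe the VIP on the apiserver port declared in the manifest
	    curl -k --max-time 5 https://192.168.49.254:8443/healthz || echo "VIP not serving"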
	I1217 01:30:13.173834 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:13.183901 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:13.184021 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:30:13.192889 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:30:13.208806 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:13.224983 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:30:13.240987 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:13.245030 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:13.255387 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.401843 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.417093 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:13.416720 1225677 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:13.423303 1225677 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:13.426149 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.647974 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
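	kubelet is started twice above, once after the unit files are written and once as part of "Verifying Kubernetes components". A hypothetical check (standard systemd commands, not part of the test) that it actually stayed up on m02 with the ExecStart shown earlier:
	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20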
	I1217 01:30:13.667990 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:30:13.668105 1225677 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:30:13.668438 1225677 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201323 1225677 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:30:14.201352 1225677 node_ready.go:38] duration metric: took 532.861298ms for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201366 1225677 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:14.201430 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:14.702397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.202165 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.701679 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.202436 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.701593 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.202167 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.702134 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.201871 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.202178 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.702421 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.201608 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.701963 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.201849 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.702468 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.201659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.702284 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.202447 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.701767 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.201870 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.701725 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.202161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.701566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.201668 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.702034 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.202090 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.201787 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.701530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.202044 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.702049 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.202554 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.201868 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.702179 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.202396 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.702380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.701675 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.201765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.701936 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.201563 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.701569 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.202228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.702471 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.201812 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.701808 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.201588 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.701513 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.202142 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.701610 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.201867 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.702427 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.202172 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.202404 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.701704 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.201454 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.702205 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.201850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.702118 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.201665 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.702497 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.201634 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.701590 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.202217 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.202252 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.701540 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.702332 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.202380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.701545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.202215 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.701654 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.202277 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.701599 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.202236 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.702370 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.201552 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.702331 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.201545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.202549 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.202225 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.701571 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.202016 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.702392 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.212791 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.701639 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.202292 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.701781 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.201523 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.701618 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.201666 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.702192 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.202218 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.701749 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.201582 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.701583 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.201568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.702305 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.202030 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.702244 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.201601 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.702328 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.202314 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.701594 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.202413 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.701574 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.201566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.702440 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.701568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.202474 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.701537 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:13.701628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:13.737091 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:13.737114 1225677 cri.go:89] found id: ""
	I1217 01:31:13.737124 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:13.737180 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.741133 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:13.741205 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:13.767828 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:13.767849 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:13.767854 1225677 cri.go:89] found id: ""
	I1217 01:31:13.767861 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:13.767916 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.772125 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.775836 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:13.775913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:13.807345 1225677 cri.go:89] found id: ""
	I1217 01:31:13.807369 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.807377 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:13.807384 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:13.807444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:13.838797 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:13.838817 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:13.838821 1225677 cri.go:89] found id: ""
	I1217 01:31:13.838829 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:13.838887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.843081 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.846896 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:13.846969 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:13.886939 1225677 cri.go:89] found id: ""
	I1217 01:31:13.886968 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.886977 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:13.886983 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:13.887045 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:13.927324 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:13.927350 1225677 cri.go:89] found id: ""
	I1217 01:31:13.927359 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:13.927418 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.932191 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:13.932281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:13.963576 1225677 cri.go:89] found id: ""
	I1217 01:31:13.963605 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.963614 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:13.963623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:13.963636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:14.061267 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:14.061313 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:14.083208 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:14.083318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:14.113297 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:14.113328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:14.168503 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:14.168540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:14.225258 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:14.225299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:14.254658 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:14.254688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:14.329954 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:14.329994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:14.363830 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:14.363859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:14.780185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:14.780213 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:14.780229 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:14.821746 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:14.821787 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.348276 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:17.359506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:17.359576 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:17.385494 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.385522 1225677 cri.go:89] found id: ""
	I1217 01:31:17.385531 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:17.385587 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.389291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:17.389381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:17.417467 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:17.417488 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:17.417493 1225677 cri.go:89] found id: ""
	I1217 01:31:17.417501 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:17.417557 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.421553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.425305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:17.425381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:17.452893 1225677 cri.go:89] found id: ""
	I1217 01:31:17.452925 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.452935 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:17.452945 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:17.453003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:17.479708 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.479730 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.479736 1225677 cri.go:89] found id: ""
	I1217 01:31:17.479743 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:17.479799 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.484009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.487543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:17.487617 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:17.522723 1225677 cri.go:89] found id: ""
	I1217 01:31:17.522751 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.522760 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:17.522767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:17.522829 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:17.550998 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.551023 1225677 cri.go:89] found id: ""
	I1217 01:31:17.551032 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:17.551086 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.554682 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:17.554767 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:17.587610 1225677 cri.go:89] found id: ""
	I1217 01:31:17.587650 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.587659 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:17.587684 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:17.587709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.616971 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:17.617002 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:17.692991 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:17.693034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:17.741052 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:17.741081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:17.761199 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:17.761228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.792936 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:17.793007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.845716 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:17.845753 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.881065 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:17.881096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:17.982043 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:17.982082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:18.070492 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:18.070517 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:18.070531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:18.117818 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:18.117911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.668542 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:20.679148 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:20.679242 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:20.706664 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:20.706687 1225677 cri.go:89] found id: ""
	I1217 01:31:20.706697 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:20.706757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.711072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:20.711147 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:20.737754 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:20.737779 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.737784 1225677 cri.go:89] found id: ""
	I1217 01:31:20.737792 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:20.737847 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.741755 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.745506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:20.745577 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:20.778364 1225677 cri.go:89] found id: ""
	I1217 01:31:20.778386 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.778394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:20.778400 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:20.778458 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:20.807237 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.807262 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:20.807267 1225677 cri.go:89] found id: ""
	I1217 01:31:20.807275 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:20.807361 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.811689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.815755 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:20.815857 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:20.842433 1225677 cri.go:89] found id: ""
	I1217 01:31:20.842454 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.842464 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:20.842470 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:20.842526 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:20.869792 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:20.869821 1225677 cri.go:89] found id: ""
	I1217 01:31:20.869831 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:20.869887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.873765 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:20.873847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:20.900911 1225677 cri.go:89] found id: ""
	I1217 01:31:20.900940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.900952 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:20.900961 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:20.900974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.954883 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:20.954920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:21.002822 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:21.002852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:21.108368 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:21.108406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:21.135557 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:21.135588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:21.176576 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:21.176610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:21.205927 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:21.205961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:21.232870 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:21.232897 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:21.312344 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:21.312377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:21.333806 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:21.333836 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:21.415860 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:21.415895 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:21.415909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:23.961577 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:23.974520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:23.974616 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:24.008513 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.008538 1225677 cri.go:89] found id: ""
	I1217 01:31:24.008548 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:24.008627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.013203 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:24.013311 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:24.041344 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.041369 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.041374 1225677 cri.go:89] found id: ""
	I1217 01:31:24.041383 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:24.041499 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.045778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.049690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:24.049764 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:24.076869 1225677 cri.go:89] found id: ""
	I1217 01:31:24.076902 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.076912 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:24.076919 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:24.076982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:24.115429 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.115504 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.115535 1225677 cri.go:89] found id: ""
	I1217 01:31:24.115571 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:24.115649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.121035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.126165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:24.126286 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:24.153228 1225677 cri.go:89] found id: ""
	I1217 01:31:24.153253 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.153262 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:24.153268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:24.153326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:24.196715 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:24.196801 1225677 cri.go:89] found id: ""
	I1217 01:31:24.196825 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:24.196912 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.201554 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:24.201642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:24.230189 1225677 cri.go:89] found id: ""
	I1217 01:31:24.230214 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.230223 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:24.230232 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:24.230244 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:24.308144 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:24.308188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:24.326634 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:24.326664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:24.400916 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:24.400938 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:24.400952 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.448701 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:24.448743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.482276 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:24.482309 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:24.515534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:24.515567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:24.625661 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:24.625708 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.652399 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:24.652439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.693518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:24.693556 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.750020 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:24.750059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.278748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:27.290609 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:27.290689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:27.316966 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.316991 1225677 cri.go:89] found id: ""
	I1217 01:31:27.316999 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:27.317054 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.320866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:27.320938 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:27.347398 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.347422 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.347427 1225677 cri.go:89] found id: ""
	I1217 01:31:27.347436 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:27.347496 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.351488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.355369 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:27.355442 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:27.381534 1225677 cri.go:89] found id: ""
	I1217 01:31:27.381564 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.381574 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:27.381580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:27.381662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:27.410739 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.410810 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.410822 1225677 cri.go:89] found id: ""
	I1217 01:31:27.410831 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:27.410892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.415095 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.419246 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:27.419364 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:27.447586 1225677 cri.go:89] found id: ""
	I1217 01:31:27.447612 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.447622 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:27.447629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:27.447693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:27.474916 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.474941 1225677 cri.go:89] found id: ""
	I1217 01:31:27.474950 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:27.475035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.479118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:27.479203 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:27.506051 1225677 cri.go:89] found id: ""
	I1217 01:31:27.506078 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.506087 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:27.506097 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:27.506108 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:27.545535 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:27.545568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:27.641749 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:27.641830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:27.661191 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:27.661226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:27.738097 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:27.738120 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:27.738134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.782011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:27.782048 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.834514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:27.834550 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.905140 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:27.905177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.940830 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:27.940862 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.969106 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:27.969136 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.998807 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:27.998835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:30.578811 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:30.590365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:30.590444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:30.618562 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:30.618585 1225677 cri.go:89] found id: ""
	I1217 01:31:30.618594 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:30.618677 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.623874 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:30.624003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:30.654712 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:30.654734 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.654740 1225677 cri.go:89] found id: ""
	I1217 01:31:30.654747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:30.654831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.658663 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.662256 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:30.662333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:30.690956 1225677 cri.go:89] found id: ""
	I1217 01:31:30.690983 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.691000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:30.691008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:30.691073 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:30.720079 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.720104 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.720110 1225677 cri.go:89] found id: ""
	I1217 01:31:30.720118 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:30.720190 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.724290 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.728443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:30.728569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:30.762597 1225677 cri.go:89] found id: ""
	I1217 01:31:30.762665 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.762683 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:30.762690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:30.762769 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:30.793999 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:30.794022 1225677 cri.go:89] found id: ""
	I1217 01:31:30.794031 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:30.794087 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.798031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:30.798111 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:30.825811 1225677 cri.go:89] found id: ""
	I1217 01:31:30.825838 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.825848 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:30.825858 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:30.825900 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.874308 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:30.874349 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.932548 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:30.932596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.973410 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:30.973440 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:31.061854 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:31.061893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:31.081279 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:31.081308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:31.173788 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:31.173816 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:31.173832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:31.203476 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:31.203507 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:31.242819 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:31.242857 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:31.270107 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:31.270137 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:31.301308 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:31.301338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:33.901065 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:33.913301 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:33.913455 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:33.945005 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:33.945033 1225677 cri.go:89] found id: ""
	I1217 01:31:33.945042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:33.945100 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.949030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:33.949099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:33.980996 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:33.981019 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:33.981024 1225677 cri.go:89] found id: ""
	I1217 01:31:33.981032 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:33.981090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.985533 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.989328 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:33.989424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:34.020066 1225677 cri.go:89] found id: ""
	I1217 01:31:34.020105 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.020115 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:34.020123 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:34.020214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:34.054526 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.054551 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.054558 1225677 cri.go:89] found id: ""
	I1217 01:31:34.054566 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:34.054628 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.058716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.062466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:34.062539 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:34.100752 1225677 cri.go:89] found id: ""
	I1217 01:31:34.100777 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.100787 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:34.100794 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:34.100856 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:34.133409 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.133431 1225677 cri.go:89] found id: ""
	I1217 01:31:34.133440 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:34.133498 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.137315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:34.137386 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:34.169015 1225677 cri.go:89] found id: ""
	I1217 01:31:34.169048 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.169058 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:34.169068 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:34.169081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:34.230112 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:34.230152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:34.275030 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:34.275071 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.303312 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:34.303341 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:34.323613 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:34.323791 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.377596 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:34.377632 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.405931 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:34.405961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:34.485309 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:34.485348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:34.537697 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:34.537780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:34.640362 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:34.640409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:34.719202 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:34.719227 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:34.719241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.248692 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:37.259883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:37.259952 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:37.288047 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.288071 1225677 cri.go:89] found id: ""
	I1217 01:31:37.288092 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:37.288147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.291723 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:37.291791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:37.320405 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.320468 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:37.320473 1225677 cri.go:89] found id: ""
	I1217 01:31:37.320481 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:37.320536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.324331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.327725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:37.327795 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:37.353914 1225677 cri.go:89] found id: ""
	I1217 01:31:37.353940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.353949 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:37.353956 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:37.354033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:37.380050 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.380082 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:37.380088 1225677 cri.go:89] found id: ""
	I1217 01:31:37.380097 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:37.380169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.384466 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.388616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:37.388737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:37.434167 1225677 cri.go:89] found id: ""
	I1217 01:31:37.434203 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.434213 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:37.434235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:37.434327 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:37.463397 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.463418 1225677 cri.go:89] found id: ""
	I1217 01:31:37.463426 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:37.463501 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.467357 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:37.467429 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:37.496476 1225677 cri.go:89] found id: ""
	I1217 01:31:37.496504 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.496514 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:37.496523 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:37.496534 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:37.580269 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:37.580312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:37.598989 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:37.599020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:37.669887 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:37.669956 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:37.669985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.696910 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:37.696934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.741514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:37.741546 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.797620 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:37.797657 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.827250 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:37.827277 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:37.860098 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:37.860127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:37.981956 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:37.982003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:38.045819 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:38.045855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.580761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:40.592635 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:40.592708 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:40.620832 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:40.620856 1225677 cri.go:89] found id: ""
	I1217 01:31:40.620866 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:40.620942 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.624827 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:40.624914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:40.662358 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.662381 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.662386 1225677 cri.go:89] found id: ""
	I1217 01:31:40.662394 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:40.662452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.666347 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.669969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:40.670068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:40.698897 1225677 cri.go:89] found id: ""
	I1217 01:31:40.698922 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.698931 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:40.698938 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:40.699026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:40.726184 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.726254 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.726265 1225677 cri.go:89] found id: ""
	I1217 01:31:40.726273 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:40.726331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.730221 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.734070 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:40.734150 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:40.760090 1225677 cri.go:89] found id: ""
	I1217 01:31:40.760116 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.760125 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:40.760185 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:40.760251 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:40.790670 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:40.790693 1225677 cri.go:89] found id: ""
	I1217 01:31:40.790702 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:40.790754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.794861 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:40.794936 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:40.826103 1225677 cri.go:89] found id: ""
	I1217 01:31:40.826129 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.826138 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:40.826147 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:40.826160 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.878987 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:40.879066 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.924714 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:40.924751 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.980944 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:40.980981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:41.072994 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:41.073031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:41.105014 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:41.105042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:41.212780 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:41.212818 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:41.241014 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:41.241042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:41.277652 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:41.277684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:41.308943 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:41.308972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:41.328092 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:41.328123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:41.410133 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:43.911410 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:43.924272 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:43.924351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:43.953227 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:43.953252 1225677 cri.go:89] found id: ""
	I1217 01:31:43.953261 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:43.953337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.957558 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:43.957674 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:43.984394 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:43.984493 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:43.984513 1225677 cri.go:89] found id: ""
	I1217 01:31:43.984547 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:43.984626 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.988727 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.992395 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:43.992531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:44.023165 1225677 cri.go:89] found id: ""
	I1217 01:31:44.023242 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.023265 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:44.023285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:44.023376 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:44.056175 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.056249 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.056268 1225677 cri.go:89] found id: ""
	I1217 01:31:44.056293 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:44.056373 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.060006 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.063548 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:44.063623 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:44.091849 1225677 cri.go:89] found id: ""
	I1217 01:31:44.091875 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.091886 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:44.091892 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:44.091950 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:44.125771 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.125837 1225677 cri.go:89] found id: ""
	I1217 01:31:44.125861 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:44.125938 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.129707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:44.129781 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:44.157267 1225677 cri.go:89] found id: ""
	I1217 01:31:44.157343 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.157359 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:44.157369 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:44.157380 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:44.179921 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:44.180042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:44.227426 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:44.227495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:44.268056 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:44.268089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:44.312908 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:44.312943 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.344639 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:44.344673 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.370623 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:44.370650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:44.400984 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:44.401017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:44.494253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:44.494291 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:44.563778 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:44.563859 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:44.563887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.630776 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:44.630812 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.217775 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:47.228858 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:47.228999 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:47.258264 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.258287 1225677 cri.go:89] found id: ""
	I1217 01:31:47.258305 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:47.258366 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.262265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:47.262366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:47.293485 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.293508 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.293552 1225677 cri.go:89] found id: ""
	I1217 01:31:47.293562 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:47.293623 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.297395 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.300792 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:47.300866 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:47.329792 1225677 cri.go:89] found id: ""
	I1217 01:31:47.329818 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.329827 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:47.329833 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:47.329890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:47.356681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.356747 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:47.356758 1225677 cri.go:89] found id: ""
	I1217 01:31:47.356767 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:47.356839 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.360948 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.364494 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:47.364598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:47.390993 1225677 cri.go:89] found id: ""
	I1217 01:31:47.391021 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.391031 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:47.391037 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:47.391099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:47.417453 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.417517 1225677 cri.go:89] found id: ""
	I1217 01:31:47.417541 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:47.417618 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.421365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:47.421437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:47.447227 1225677 cri.go:89] found id: ""
	I1217 01:31:47.447254 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.447264 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:47.447273 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:47.447285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.474445 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:47.474475 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:47.546929 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:47.546947 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:47.546962 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.621943 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:47.621985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:47.653654 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:47.653679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:47.751509 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:47.751548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:47.773290 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:47.773323 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.802347 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:47.802378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.849646 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:47.849680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.894275 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:47.894315 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.949242 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:47.949281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.480769 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:50.491711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:50.491827 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:50.519320 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.519345 1225677 cri.go:89] found id: ""
	I1217 01:31:50.519353 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:50.519440 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.523424 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:50.523533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:50.551627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:50.551652 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:50.551658 1225677 cri.go:89] found id: ""
	I1217 01:31:50.551665 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:50.551751 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.555585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.559244 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:50.559347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:50.586218 1225677 cri.go:89] found id: ""
	I1217 01:31:50.586241 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.586249 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:50.586255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:50.586333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:50.618629 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.618661 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.618667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.618675 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:50.618776 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.622850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.626687 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:50.626824 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:50.659667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.659703 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.659713 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:50.659738 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:50.659817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:50.686997 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.687069 1225677 cri.go:89] found id: ""
	I1217 01:31:50.687092 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:50.687160 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.690709 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:50.690823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:50.721432 1225677 cri.go:89] found id: ""
	I1217 01:31:50.721509 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.721534 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:50.721553 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:50.721583 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.748223 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:50.748250 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.807290 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:50.807328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.835575 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:50.835603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.861513 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:50.861539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:50.937079 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:50.937118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:51.023701 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:51.023722 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:51.023736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:51.063322 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:51.063360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:51.134936 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:51.134983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:51.172581 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:51.172611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:51.279920 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:51.279958 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
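(Context, not part of the captured log: the cycle above is minikube waiting for the apiserver to come back. Each pass locates the control-plane containers with crictl, tails their logs, and retries "kubectl describe nodes", which keeps failing with "connection refused" on localhost:8443. A minimal sketch of the same probe run by hand on the node, using only the commands already shown in the log; the container id is a placeholder:

	# list apiserver containers (all states), quiet output gives just the ids
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the last 400 lines of a specific container's log
	sudo crictl logs --tail 400 <container-id>
	# container runtime and kubelet unit logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# the node-describe call that is being refused while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The log below continues with further iterations of the same loop.)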
	I1217 01:31:53.800293 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:53.813493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:53.813572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:53.855699 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:53.855727 1225677 cri.go:89] found id: ""
	I1217 01:31:53.855737 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:53.855790 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.860842 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:53.860915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:53.905688 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:53.905715 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:53.905720 1225677 cri.go:89] found id: ""
	I1217 01:31:53.905727 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:53.905796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.911027 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.916033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:53.916105 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:53.971312 1225677 cri.go:89] found id: ""
	I1217 01:31:53.971339 1225677 logs.go:282] 0 containers: []
	W1217 01:31:53.971349 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:53.971356 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:53.971477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:54.021427 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.021456 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:54.021474 1225677 cri.go:89] found id: ""
	I1217 01:31:54.021488 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:54.021585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.030798 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.035177 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:54.035371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:54.113099 1225677 cri.go:89] found id: ""
	I1217 01:31:54.113124 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.113133 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:54.113139 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:54.113246 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:54.166627 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.166651 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.166658 1225677 cri.go:89] found id: ""
	I1217 01:31:54.166665 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:54.166783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.171754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.182182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:54.182283 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:54.234503 1225677 cri.go:89] found id: ""
	I1217 01:31:54.234567 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.234591 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:54.234615 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:54.234642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.275461 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:54.275532 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:54.366758 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:54.366801 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:54.403474 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:54.403513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:54.422090 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:54.422131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:54.486461 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:54.486497 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.553429 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:54.553466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.599563 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:54.599593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:54.706755 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:54.706795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:54.812798 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:54.812822 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:54.812835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:54.838401 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:54.838433 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:54.893784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:54.893823 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.427168 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:57.438551 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:57.438655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:57.468636 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:57.468660 1225677 cri.go:89] found id: ""
	I1217 01:31:57.468669 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:57.468726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.472745 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:57.472819 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:57.500682 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:57.500702 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.500707 1225677 cri.go:89] found id: ""
	I1217 01:31:57.500714 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:57.500777 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.504719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.508458 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:57.508557 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:57.540789 1225677 cri.go:89] found id: ""
	I1217 01:31:57.540813 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.540822 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:57.540828 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:57.540889 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:57.570366 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.570392 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.570398 1225677 cri.go:89] found id: ""
	I1217 01:31:57.570406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:57.570462 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.574531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.578702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:57.578782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:57.608017 1225677 cri.go:89] found id: ""
	I1217 01:31:57.608042 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.608051 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:57.608058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:57.608122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:57.634195 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:57.634218 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.634224 1225677 cri.go:89] found id: ""
	I1217 01:31:57.634232 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:57.634317 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.638339 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.642068 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:57.642166 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:57.669214 1225677 cri.go:89] found id: ""
	I1217 01:31:57.669250 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.669259 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:57.669268 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:57.669284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.733958 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:57.733991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.790688 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:57.790731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.825378 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:57.825409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:57.903425 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:57.903465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:57.977243 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:57.977266 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:57.977280 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:58.008228 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:58.008262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:58.044832 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:58.044861 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:58.076961 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:58.077009 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:58.174022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:58.174061 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:58.194526 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:58.194561 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:58.225629 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:58.225658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.768659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:00.779781 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:00.779855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:00.809961 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:00.809984 1225677 cri.go:89] found id: ""
	I1217 01:32:00.809993 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:00.810055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.814113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:00.814232 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:00.842110 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.842179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:00.842193 1225677 cri.go:89] found id: ""
	I1217 01:32:00.842202 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:00.842259 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.846284 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.850463 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:00.850535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:00.877321 1225677 cri.go:89] found id: ""
	I1217 01:32:00.877347 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.877357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:00.877364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:00.877424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:00.903950 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:00.904025 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:00.904044 1225677 cri.go:89] found id: ""
	I1217 01:32:00.904065 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:00.904183 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.907995 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.911685 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:00.911762 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:00.940826 1225677 cri.go:89] found id: ""
	I1217 01:32:00.940856 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.940865 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:00.940871 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:00.940931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:00.967056 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:00.967077 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:00.967088 1225677 cri.go:89] found id: ""
	I1217 01:32:00.967097 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:00.967175 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.970953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.975717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:00.975791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:01.010237 1225677 cri.go:89] found id: ""
	I1217 01:32:01.010262 1225677 logs.go:282] 0 containers: []
	W1217 01:32:01.010272 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:01.010281 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:01.010294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:01.030320 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:01.030353 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:01.055381 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:01.055409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:01.097515 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:01.097548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:01.166756 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:01.166797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:01.208792 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:01.208824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:01.246024 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:01.246056 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:01.340436 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:01.340519 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:01.412662 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:01.412684 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:01.412699 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:01.467190 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:01.467228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:01.500459 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:01.500486 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:01.531449 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:01.531477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:04.134627 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:04.145902 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:04.145978 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:04.185746 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.185766 1225677 cri.go:89] found id: ""
	I1217 01:32:04.185774 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:04.185831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.189797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:04.189867 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:04.228673 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.228694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.228698 1225677 cri.go:89] found id: ""
	I1217 01:32:04.228706 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:04.228759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.233260 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.238075 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:04.238212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:04.268955 1225677 cri.go:89] found id: ""
	I1217 01:32:04.268983 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.268992 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:04.268999 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:04.269102 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:04.299973 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.300041 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.300061 1225677 cri.go:89] found id: ""
	I1217 01:32:04.300088 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:04.300185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.303813 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.307456 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:04.307533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:04.334293 1225677 cri.go:89] found id: ""
	I1217 01:32:04.334319 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.334331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:04.334338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:04.334398 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:04.360886 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.360906 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.360910 1225677 cri.go:89] found id: ""
	I1217 01:32:04.360918 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:04.360974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.365024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.368933 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:04.369005 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:04.397116 1225677 cri.go:89] found id: ""
	I1217 01:32:04.397140 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.397149 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:04.397159 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:04.397174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:04.490637 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:04.490721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.531861 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:04.531938 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.577801 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:04.577838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.635487 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:04.635524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.667260 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:04.667290 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:04.718117 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:04.718146 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:04.737680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:04.737711 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:04.825872 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:04.825894 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:04.825908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.858804 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:04.858833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.887920 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:04.887953 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.916371 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:04.916476 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:07.492728 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:07.504442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:07.504532 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:07.538372 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.538403 1225677 cri.go:89] found id: ""
	I1217 01:32:07.538442 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:07.538517 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.542523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:07.542597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:07.576339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:07.576360 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:07.576364 1225677 cri.go:89] found id: ""
	I1217 01:32:07.576372 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:07.576459 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.580149 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.584111 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:07.584196 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:07.610578 1225677 cri.go:89] found id: ""
	I1217 01:32:07.610605 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.610614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:07.610621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:07.610678 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:07.637129 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:07.637151 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:07.637157 1225677 cri.go:89] found id: ""
	I1217 01:32:07.637164 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:07.637217 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.641090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.644872 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:07.644992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:07.679300 1225677 cri.go:89] found id: ""
	I1217 01:32:07.679322 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.679331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:07.679350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:07.679419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:07.719129 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:07.719155 1225677 cri.go:89] found id: ""
	I1217 01:32:07.719164 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:07.719231 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.723681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:07.723755 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:07.756924 1225677 cri.go:89] found id: ""
	I1217 01:32:07.756950 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.756969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:07.756979 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:07.756991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:07.856049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:07.856088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:07.935429 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:07.935456 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:07.935469 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.961013 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:07.961042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:08.005989 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:08.006024 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:08.039061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:08.039092 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:08.058159 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:08.058194 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:08.112456 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:08.112490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:08.176389 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:08.176457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:08.215782 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:08.215809 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:08.244713 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:08.244743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:10.828143 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:10.838717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:10.838793 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:10.869672 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:10.869696 1225677 cri.go:89] found id: ""
	I1217 01:32:10.869705 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:10.869761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.873603 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:10.873720 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:10.900811 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:10.900837 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:10.900843 1225677 cri.go:89] found id: ""
	I1217 01:32:10.900851 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:10.900906 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.904643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.908193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:10.908261 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:10.935598 1225677 cri.go:89] found id: ""
	I1217 01:32:10.935624 1225677 logs.go:282] 0 containers: []
	W1217 01:32:10.935634 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:10.935641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:10.935698 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:10.966869 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:10.966894 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:10.966899 1225677 cri.go:89] found id: ""
	I1217 01:32:10.966907 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:10.966962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.970920 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.974605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:10.974715 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:11.012577 1225677 cri.go:89] found id: ""
	I1217 01:32:11.012602 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.012612 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:11.012618 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:11.012680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:11.048075 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.048100 1225677 cri.go:89] found id: ""
	I1217 01:32:11.048130 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:11.048185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:11.052014 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:11.052089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:11.084486 1225677 cri.go:89] found id: ""
	I1217 01:32:11.084511 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.084524 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:11.084533 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:11.084545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:11.192042 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:11.192076 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:11.218345 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:11.218378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:11.261837 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:11.261869 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:11.321100 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:11.321138 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:11.356360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:11.356390 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:11.433012 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:11.433054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:11.511248 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:11.511270 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:11.511287 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:11.549584 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:11.549614 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:11.596753 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:11.596786 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.626208 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:11.626240 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.173611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:14.187629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:14.187704 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:14.223146 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.223170 1225677 cri.go:89] found id: ""
	I1217 01:32:14.223179 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:14.223264 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.227607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:14.227721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:14.255753 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:14.255791 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.255796 1225677 cri.go:89] found id: ""
	I1217 01:32:14.255804 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:14.255881 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.259963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.263644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:14.263717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:14.290575 1225677 cri.go:89] found id: ""
	I1217 01:32:14.290599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.290614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:14.290621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:14.290681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:14.318287 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.318309 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.318314 1225677 cri.go:89] found id: ""
	I1217 01:32:14.318323 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:14.318378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.322352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.326073 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:14.326157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:14.352179 1225677 cri.go:89] found id: ""
	I1217 01:32:14.352205 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.352214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:14.352221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:14.352304 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:14.380539 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.380565 1225677 cri.go:89] found id: ""
	I1217 01:32:14.380582 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:14.380678 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.385134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:14.385210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:14.417374 1225677 cri.go:89] found id: ""
	I1217 01:32:14.417407 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.417417 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:14.417441 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:14.417457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.464173 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:14.464209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.491958 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:14.492035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.547112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:14.547180 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:14.617502 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:14.617525 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:14.617548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.645669 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:14.645697 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.705027 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:14.705070 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.738615 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:14.738689 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:14.819881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:14.819961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:14.917702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:14.917739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:14.940092 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:14.940127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.482077 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:17.493126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:17.493227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:17.520116 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.520137 1225677 cri.go:89] found id: ""
	I1217 01:32:17.520155 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:17.520234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.524492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:17.524572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:17.553355 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.553419 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:17.553439 1225677 cri.go:89] found id: ""
	I1217 01:32:17.553454 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:17.553512 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.557145 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.560580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:17.560663 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:17.586798 1225677 cri.go:89] found id: ""
	I1217 01:32:17.586824 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.586843 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:17.586850 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:17.586915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:17.614063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.614096 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:17.614102 1225677 cri.go:89] found id: ""
	I1217 01:32:17.614110 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:17.614174 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.618083 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.621593 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:17.621662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:17.652917 1225677 cri.go:89] found id: ""
	I1217 01:32:17.652943 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.652964 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:17.652972 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:17.653029 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:17.679412 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.679435 1225677 cri.go:89] found id: ""
	I1217 01:32:17.679443 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:17.679508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.683530 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:17.683606 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:17.714591 1225677 cri.go:89] found id: ""
	I1217 01:32:17.714618 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.714628 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:17.714638 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:17.714652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.774158 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:17.774193 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.802731 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:17.802759 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:17.837385 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:17.837413 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:17.948723 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:17.948766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:17.967594 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:17.967622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.997257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:17.997350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:18.046163 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:18.046204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:18.075264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:18.075345 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:18.179955 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:18.180007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:18.261983 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:18.262017 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:18.262034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.814850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:20.826637 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:20.826710 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:20.867818 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:20.867839 1225677 cri.go:89] found id: ""
	I1217 01:32:20.867847 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:20.867902 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.871814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:20.871895 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:20.902722 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.902742 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:20.902746 1225677 cri.go:89] found id: ""
	I1217 01:32:20.902755 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:20.902808 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.907236 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.911156 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:20.911230 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:20.937933 1225677 cri.go:89] found id: ""
	I1217 01:32:20.937959 1225677 logs.go:282] 0 containers: []
	W1217 01:32:20.937968 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:20.937974 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:20.938063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:20.965558 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:20.965581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:20.965587 1225677 cri.go:89] found id: ""
	I1217 01:32:20.965595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:20.965652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.969565 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.973428 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:20.973498 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:21.012487 1225677 cri.go:89] found id: ""
	I1217 01:32:21.012512 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.012521 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:21.012527 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:21.012590 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:21.041411 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.041443 1225677 cri.go:89] found id: ""
	I1217 01:32:21.041455 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:21.041515 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:21.045571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:21.045672 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:21.074982 1225677 cri.go:89] found id: ""
	I1217 01:32:21.075005 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.075014 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:21.075023 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:21.075036 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:21.105151 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:21.105181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.131324 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:21.131398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:21.228426 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:21.228461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:21.285988 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:21.286020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:21.369964 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:21.370005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:21.406263 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:21.406295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:21.425680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:21.425710 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:21.503044 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:21.503067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:21.503083 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:21.533119 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:21.533147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:21.584619 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:21.584652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.145239 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:24.156031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:24.156112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:24.191491 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.191515 1225677 cri.go:89] found id: ""
	I1217 01:32:24.191523 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:24.191579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.196271 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:24.196344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:24.229412 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.229433 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.229437 1225677 cri.go:89] found id: ""
	I1217 01:32:24.229445 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:24.229502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.233353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.237055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:24.237137 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:24.264226 1225677 cri.go:89] found id: ""
	I1217 01:32:24.264252 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.264262 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:24.264268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:24.264330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:24.300946 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.300972 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.300977 1225677 cri.go:89] found id: ""
	I1217 01:32:24.300984 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:24.301038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.304900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.308160 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:24.308277 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:24.334573 1225677 cri.go:89] found id: ""
	I1217 01:32:24.334596 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.334606 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:24.334612 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:24.334670 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:24.367769 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.367791 1225677 cri.go:89] found id: ""
	I1217 01:32:24.367800 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:24.367853 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.371482 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:24.371586 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:24.398071 1225677 cri.go:89] found id: ""
	I1217 01:32:24.398095 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.398104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:24.398112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:24.398124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:24.466998 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:24.467073 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:24.467093 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.494797 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:24.494826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.566818 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:24.566859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.627760 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:24.627797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.657250 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:24.657278 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.683514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:24.683549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:24.703093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:24.703129 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.757376 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:24.757411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:24.839791 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:24.839826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:24.883947 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:24.883978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:27.492559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:27.503372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:27.503445 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:27.541590 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:27.541611 1225677 cri.go:89] found id: ""
	I1217 01:32:27.541620 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:27.541675 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.545373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:27.545448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:27.571462 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:27.571486 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:27.571491 1225677 cri.go:89] found id: ""
	I1217 01:32:27.571499 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:27.571555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.575671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.579240 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:27.579332 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:27.612215 1225677 cri.go:89] found id: ""
	I1217 01:32:27.612245 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.612254 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:27.612261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:27.612339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:27.639672 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:27.639696 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.639701 1225677 cri.go:89] found id: ""
	I1217 01:32:27.639708 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:27.639782 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.643953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.647820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:27.647942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:27.673115 1225677 cri.go:89] found id: ""
	I1217 01:32:27.673141 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.673150 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:27.673157 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:27.673215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:27.703404 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.703428 1225677 cri.go:89] found id: ""
	I1217 01:32:27.703437 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:27.703566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.708031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:27.708106 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:27.736748 1225677 cri.go:89] found id: ""
	I1217 01:32:27.736770 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.736779 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:27.736789 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:27.736802 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.763699 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:27.763727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.790990 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:27.791020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:27.871644 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:27.871680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:27.904392 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:27.904499 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:27.926297 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:27.926333 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:28.002149 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:28.002177 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:28.002196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:28.030901 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:28.030933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:28.070431 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:28.070463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:28.124957 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:28.124994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:28.185427 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:28.185465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:30.787761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:30.798953 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:30.799025 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:30.826532 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:30.826561 1225677 cri.go:89] found id: ""
	I1217 01:32:30.826570 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:30.826631 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.830429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:30.830503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:30.856397 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:30.856449 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:30.856462 1225677 cri.go:89] found id: ""
	I1217 01:32:30.856470 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:30.856524 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.860460 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.864121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:30.864204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:30.893119 1225677 cri.go:89] found id: ""
	I1217 01:32:30.893143 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.893153 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:30.893166 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:30.893225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:30.942371 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:30.942393 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:30.942398 1225677 cri.go:89] found id: ""
	I1217 01:32:30.942406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:30.942463 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.947748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.953053 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:30.953140 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:30.991763 1225677 cri.go:89] found id: ""
	I1217 01:32:30.991793 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.991802 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:30.991817 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:30.991888 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:31.026936 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.026958 1225677 cri.go:89] found id: ""
	I1217 01:32:31.026967 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:31.027022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:31.031253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:31.031338 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:31.060606 1225677 cri.go:89] found id: ""
	I1217 01:32:31.060632 1225677 logs.go:282] 0 containers: []
	W1217 01:32:31.060641 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:31.060650 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:31.060666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.089805 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:31.089837 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:31.179774 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:31.179814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:31.231705 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:31.231739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:31.264982 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:31.265014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:31.295319 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:31.295348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:31.398598 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:31.398635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:31.418439 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:31.418473 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:31.505328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:31.505348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:31.505364 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:31.534574 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:31.534604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:31.584571 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:31.584607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.145660 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:34.156555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:34.156680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:34.189334 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.189353 1225677 cri.go:89] found id: ""
	I1217 01:32:34.189361 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:34.189415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.193025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:34.193117 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:34.229137 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.229160 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.229165 1225677 cri.go:89] found id: ""
	I1217 01:32:34.229176 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:34.229234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.232921 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.236260 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:34.236361 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:34.264990 1225677 cri.go:89] found id: ""
	I1217 01:32:34.265013 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.265022 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:34.265028 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:34.265086 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:34.292130 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.292205 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.292225 1225677 cri.go:89] found id: ""
	I1217 01:32:34.292250 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:34.292344 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.295987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.299388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:34.299500 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:34.325943 1225677 cri.go:89] found id: ""
	I1217 01:32:34.326026 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.326042 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:34.326049 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:34.326108 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:34.363328 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.363351 1225677 cri.go:89] found id: ""
	I1217 01:32:34.363361 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:34.363415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.367803 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:34.367878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:34.394984 1225677 cri.go:89] found id: ""
	I1217 01:32:34.395011 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.395020 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:34.395029 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:34.395065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:34.470015 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:34.470036 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:34.470049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.496057 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:34.496091 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.549522 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:34.549555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.592693 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:34.592728 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.652425 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:34.652505 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.680716 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:34.680747 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.707492 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:34.707522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:34.787410 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:34.787492 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:34.892246 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:34.892284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:34.910499 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:34.910530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:37.463203 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:37.474127 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:37.474200 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:37.506946 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.507018 1225677 cri.go:89] found id: ""
	I1217 01:32:37.507042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:37.507123 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.511460 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:37.511535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:37.546992 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:37.547014 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:37.547020 1225677 cri.go:89] found id: ""
	I1217 01:32:37.547028 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:37.547090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.550864 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.554364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:37.554450 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:37.592224 1225677 cri.go:89] found id: ""
	I1217 01:32:37.592353 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.592394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:37.592437 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:37.592579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:37.620557 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.620581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:37.620587 1225677 cri.go:89] found id: ""
	I1217 01:32:37.620595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:37.620691 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.624719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.628465 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:37.628541 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:37.657843 1225677 cri.go:89] found id: ""
	I1217 01:32:37.657870 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.657878 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:37.657885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:37.657955 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:37.686792 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.686825 1225677 cri.go:89] found id: ""
	I1217 01:32:37.686834 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:37.686898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.690651 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:37.690783 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:37.719977 1225677 cri.go:89] found id: ""
	I1217 01:32:37.720000 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.720009 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:37.720018 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:37.720030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:37.738580 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:37.738610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:37.814847 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:37.814869 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:37.814883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.840694 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:37.840723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.901817 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:37.901855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.935757 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:37.935839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:38.014642 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:38.014679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:38.115079 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:38.115123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:38.157390 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:38.157423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:38.204086 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:38.204123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:38.235323 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:38.235355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:40.766175 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:40.777746 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:40.777818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:40.809026 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:40.809051 1225677 cri.go:89] found id: ""
	I1217 01:32:40.809060 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:40.809157 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.813212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:40.813294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:40.840793 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:40.840821 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:40.840826 1225677 cri.go:89] found id: ""
	I1217 01:32:40.840834 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:40.840915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.845018 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.848655 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:40.848732 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:40.875726 1225677 cri.go:89] found id: ""
	I1217 01:32:40.875750 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.875761 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:40.875767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:40.875825 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:40.902504 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:40.902527 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:40.902532 1225677 cri.go:89] found id: ""
	I1217 01:32:40.902540 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:40.902593 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.906394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.910259 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:40.910330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:40.936570 1225677 cri.go:89] found id: ""
	I1217 01:32:40.936599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.936609 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:40.936616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:40.936676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:40.964358 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:40.964381 1225677 cri.go:89] found id: ""
	I1217 01:32:40.964389 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:40.964541 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.968221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:40.968292 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:40.998606 1225677 cri.go:89] found id: ""
	I1217 01:32:40.998633 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.998644 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:40.998654 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:40.998668 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:41.022520 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:41.022551 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:41.051598 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:41.051625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:41.091115 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:41.091148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:41.159179 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:41.159223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:41.190970 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:41.190997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:41.225786 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:41.225815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:41.294484 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:41.294509 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:41.294523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:41.346979 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:41.347017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:41.374095 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:41.374126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:41.456622 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:41.456658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.066375 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:44.077293 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:44.077365 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:44.104332 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.104476 1225677 cri.go:89] found id: ""
	I1217 01:32:44.104504 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:44.104580 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.108715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:44.108799 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:44.140649 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.140672 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.140677 1225677 cri.go:89] found id: ""
	I1217 01:32:44.140684 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:44.140763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.144834 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.148730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:44.148811 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:44.197233 1225677 cri.go:89] found id: ""
	I1217 01:32:44.197259 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.197268 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:44.197274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:44.197350 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:44.240339 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:44.240363 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.240368 1225677 cri.go:89] found id: ""
	I1217 01:32:44.240376 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:44.240456 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.244962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.248793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:44.248913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:44.278464 1225677 cri.go:89] found id: ""
	I1217 01:32:44.278491 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.278501 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:44.278507 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:44.278585 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:44.308914 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.308938 1225677 cri.go:89] found id: ""
	I1217 01:32:44.308958 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:44.309048 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.313878 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:44.313951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:44.344530 1225677 cri.go:89] found id: ""
	I1217 01:32:44.344555 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.344577 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:44.344588 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:44.344600 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.372833 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:44.372864 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:44.452952 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:44.452990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:44.474609 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:44.474642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:44.552482 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:44.552507 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:44.552521 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.580322 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:44.580352 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.610292 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:44.610320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:44.643236 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:44.643266 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.755542 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:44.755601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.808715 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:44.808771 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.856301 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:44.856338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.419847 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:47.431877 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:47.431951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:47.461659 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:47.461682 1225677 cri.go:89] found id: ""
	I1217 01:32:47.461690 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:47.461747 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.465698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:47.465822 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:47.495157 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.495179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.495184 1225677 cri.go:89] found id: ""
	I1217 01:32:47.495192 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:47.495247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.499337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.503995 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:47.504080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:47.543135 1225677 cri.go:89] found id: ""
	I1217 01:32:47.543158 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.543167 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:47.543174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:47.543238 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:47.572765 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.572791 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:47.572797 1225677 cri.go:89] found id: ""
	I1217 01:32:47.572804 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:47.572867 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.577796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.581659 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:47.581760 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:47.612595 1225677 cri.go:89] found id: ""
	I1217 01:32:47.612660 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.612674 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:47.612681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:47.612744 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:47.642199 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:47.642223 1225677 cri.go:89] found id: ""
	I1217 01:32:47.642231 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:47.642287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.646215 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:47.646285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:47.672805 1225677 cri.go:89] found id: ""
	I1217 01:32:47.672830 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.672839 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:47.672849 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:47.672859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:47.702885 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:47.702917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:47.723284 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:47.723318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:47.799644 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:47.799674 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:47.799688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.839852 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:47.839884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.888519 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:47.888557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:47.973305 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:47.973344 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:48.081814 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:48.081853 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:48.114561 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:48.114590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:48.208193 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:48.208234 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:48.241262 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:48.241293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.770940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:50.781882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:50.781951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:50.809569 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:50.809595 1225677 cri.go:89] found id: ""
	I1217 01:32:50.809604 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:50.809665 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.814519 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:50.814594 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:50.849443 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:50.849472 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:50.849478 1225677 cri.go:89] found id: ""
	I1217 01:32:50.849486 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:50.849564 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.853510 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.857119 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:50.857224 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:50.888246 1225677 cri.go:89] found id: ""
	I1217 01:32:50.888275 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.888284 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:50.888291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:50.888351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:50.916294 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:50.916320 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:50.916326 1225677 cri.go:89] found id: ""
	I1217 01:32:50.916333 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:50.916388 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.920299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.924658 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:50.924730 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:50.957966 1225677 cri.go:89] found id: ""
	I1217 01:32:50.957994 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.958003 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:50.958009 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:50.958069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:50.991282 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.991304 1225677 cri.go:89] found id: ""
	I1217 01:32:50.991312 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:50.991377 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.995730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:50.995797 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:51.034122 1225677 cri.go:89] found id: ""
	I1217 01:32:51.034199 1225677 logs.go:282] 0 containers: []
	W1217 01:32:51.034238 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:51.034266 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:51.034295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:51.062022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:51.062100 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:51.081698 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:51.081733 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:51.112382 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:51.112482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:51.172152 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:51.172190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:51.213603 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:51.213634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:51.297400 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:51.297439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:51.331335 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:51.331412 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:51.426253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:51.426289 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:51.499310 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:51.499332 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:51.499348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:51.572760 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:51.572795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.122214 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:54.133644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:54.133721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:54.162887 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.162912 1225677 cri.go:89] found id: ""
	I1217 01:32:54.162922 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:54.162978 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.167057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:54.167127 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:54.205900 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.205920 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.205925 1225677 cri.go:89] found id: ""
	I1217 01:32:54.205932 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:54.205987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.210350 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.214343 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:54.214419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:54.246321 1225677 cri.go:89] found id: ""
	I1217 01:32:54.246348 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.246357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:54.246364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:54.246424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:54.276281 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.276305 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.276310 1225677 cri.go:89] found id: ""
	I1217 01:32:54.276319 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:54.276379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.281009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.285204 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:54.285281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:54.311149 1225677 cri.go:89] found id: ""
	I1217 01:32:54.311225 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.311251 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:54.311268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:54.311342 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:54.339737 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.339763 1225677 cri.go:89] found id: ""
	I1217 01:32:54.339771 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:54.339825 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.343615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:54.343749 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:54.370945 1225677 cri.go:89] found id: ""
	I1217 01:32:54.370971 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.370981 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:54.370991 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:54.371003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:54.390464 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:54.390495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:54.470328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:54.470363 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:54.470377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.495970 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:54.495999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.557300 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:54.557336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.585791 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:54.585821 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.612126 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:54.612152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:54.653218 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:54.653246 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:54.752385 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:54.752432 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.814139 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:54.814175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.885191 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:54.885226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:57.468539 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:57.479841 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:57.479913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:57.511032 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.511058 1225677 cri.go:89] found id: ""
	I1217 01:32:57.511067 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:57.511130 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.515373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:57.515446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:57.558508 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.558531 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:57.558537 1225677 cri.go:89] found id: ""
	I1217 01:32:57.558550 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:57.558622 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.563150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.567245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:57.567322 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:57.594294 1225677 cri.go:89] found id: ""
	I1217 01:32:57.594330 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.594341 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:57.594347 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:57.594411 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:57.626077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:57.626100 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.626106 1225677 cri.go:89] found id: ""
	I1217 01:32:57.626114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:57.626173 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.630289 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.634055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:57.634130 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:57.661683 1225677 cri.go:89] found id: ""
	I1217 01:32:57.661711 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.661721 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:57.661727 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:57.661785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:57.690521 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.690556 1225677 cri.go:89] found id: ""
	I1217 01:32:57.690565 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:57.690632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.694587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:57.694687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:57.721760 1225677 cri.go:89] found id: ""
	I1217 01:32:57.721783 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.721792 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:57.721801 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:57.721830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.749279 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:57.749308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.781988 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:57.782017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:57.820059 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:57.820089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:57.841084 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:57.841121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.884653 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:57.884752 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.932570 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:57.932605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:58.015607 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:58.015649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:58.116442 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:58.116479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:58.205896 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:58.205921 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:58.205934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:58.252524 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:58.252595 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.831933 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:00.843915 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:00.844011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:00.872994 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:00.873018 1225677 cri.go:89] found id: ""
	I1217 01:33:00.873027 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:00.873080 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.876819 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:00.876914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:00.904306 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:00.904329 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:00.904334 1225677 cri.go:89] found id: ""
	I1217 01:33:00.904342 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:00.904397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.908029 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.911563 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:00.911642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:00.940652 1225677 cri.go:89] found id: ""
	I1217 01:33:00.940678 1225677 logs.go:282] 0 containers: []
	W1217 01:33:00.940687 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:00.940694 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:00.940752 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:00.967462 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.967503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:00.967514 1225677 cri.go:89] found id: ""
	I1217 01:33:00.967522 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:00.967601 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.971689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.976107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:00.976187 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:01.015150 1225677 cri.go:89] found id: ""
	I1217 01:33:01.015230 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.015253 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:01.015273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:01.015366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:01.044488 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.044553 1225677 cri.go:89] found id: ""
	I1217 01:33:01.044578 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:01.044671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:01.048372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:01.048523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:01.083014 1225677 cri.go:89] found id: ""
	I1217 01:33:01.083096 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.083121 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:01.083173 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:01.083208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:01.181547 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:01.181588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:01.202930 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:01.202966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:01.255543 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:01.255580 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:01.282899 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:01.282927 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.310357 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:01.310387 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:01.361428 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:01.361458 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:01.439491 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:01.439564 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:01.439594 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:01.466548 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:01.466575 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:01.524293 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:01.524332 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:01.603276 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:01.603314 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.194004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:04.206859 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:04.206931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:04.245597 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.245621 1225677 cri.go:89] found id: ""
	I1217 01:33:04.245630 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:04.245688 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.249418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:04.249489 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:04.278257 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.278277 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.278284 1225677 cri.go:89] found id: ""
	I1217 01:33:04.278291 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:04.278405 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.282613 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.286801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:04.286878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:04.313756 1225677 cri.go:89] found id: ""
	I1217 01:33:04.313825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.313852 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:04.313866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:04.313946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:04.343505 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.343528 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.343533 1225677 cri.go:89] found id: ""
	I1217 01:33:04.343542 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:04.343595 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.347432 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.351245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:04.351318 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:04.378415 1225677 cri.go:89] found id: ""
	I1217 01:33:04.378443 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.378453 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:04.378461 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:04.378523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:04.404603 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.404635 1225677 cri.go:89] found id: ""
	I1217 01:33:04.404645 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:04.404699 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.408372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:04.408490 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:04.435025 1225677 cri.go:89] found id: ""
	I1217 01:33:04.435053 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.435063 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:04.435072 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:04.435084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:04.453398 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:04.453431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:04.532185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:04.532207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:04.532220 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.565093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:04.565122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.608097 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:04.608141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.669592 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:04.669635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.698199 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:04.698230 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.781891 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:04.781933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:04.889443 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:04.889483 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.935503 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:04.935540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.962255 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:04.962288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
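	The cycle above repeats every few seconds: minikube probes for a running kube-apiserver process, enumerates control-plane containers with crictl, and re-gathers component logs while "describe nodes" keeps failing because nothing is listening on localhost:8443. A minimal sketch of the same manual check, using only the commands and paths that appear in the log above (run on the minikube node; CONTAINER_ID is a placeholder taken from the crictl output):
	
	    # Is an apiserver process running for this profile?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Which kube-apiserver containers exist (running or exited)?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Inspect why the container is failing (CONTAINER_ID from the command above).
	    sudo /usr/local/bin/crictl logs --tail 400 CONTAINER_ID
	    # This is the call that keeps failing while localhost:8443 refuses connections.
	    sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig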
	I1217 01:33:07.497519 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:07.509544 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:07.509619 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:07.541912 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.541930 1225677 cri.go:89] found id: ""
	I1217 01:33:07.541938 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:07.541998 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.545880 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:07.545967 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:07.576061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.576085 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:07.576090 1225677 cri.go:89] found id: ""
	I1217 01:33:07.576098 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:07.576156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.580118 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.584118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:07.584216 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:07.613260 1225677 cri.go:89] found id: ""
	I1217 01:33:07.613288 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.613297 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:07.613304 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:07.613390 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:07.643089 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:07.643113 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:07.643118 1225677 cri.go:89] found id: ""
	I1217 01:33:07.643126 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:07.643181 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.646892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.650360 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:07.650433 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:07.677367 1225677 cri.go:89] found id: ""
	I1217 01:33:07.677393 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.677403 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:07.677409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:07.677515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:07.705475 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.705499 1225677 cri.go:89] found id: ""
	I1217 01:33:07.705508 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:07.705588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.709429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:07.709538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:07.737814 1225677 cri.go:89] found id: ""
	I1217 01:33:07.737838 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.737846 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:07.737855 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:07.737867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.767138 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:07.767166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.800084 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:07.800165 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:07.820093 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:07.820124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:07.887706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:07.887729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:07.887744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.915091 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:07.915122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.956054 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:07.956116 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:08.019066 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:08.019105 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:08.080377 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:08.080423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:08.124710 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:08.124793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:08.214495 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:08.214593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:10.827104 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:10.838284 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:10.838422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:10.874165 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:10.874184 1225677 cri.go:89] found id: ""
	I1217 01:33:10.874192 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:10.874245 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.878108 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:10.878180 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:10.903766 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:10.903789 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:10.903794 1225677 cri.go:89] found id: ""
	I1217 01:33:10.903802 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:10.903857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.907574 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.911142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:10.911214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:10.938246 1225677 cri.go:89] found id: ""
	I1217 01:33:10.938273 1225677 logs.go:282] 0 containers: []
	W1217 01:33:10.938283 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:10.938289 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:10.938347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:10.964843 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:10.964866 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:10.964871 1225677 cri.go:89] found id: ""
	I1217 01:33:10.964879 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:10.964935 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.968730 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.972392 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:10.972503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:11.008562 1225677 cri.go:89] found id: ""
	I1217 01:33:11.008590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.008600 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:11.008607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:11.008716 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:11.041307 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.041342 1225677 cri.go:89] found id: ""
	I1217 01:33:11.041352 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:11.041408 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:11.045319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:11.045394 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:11.072727 1225677 cri.go:89] found id: ""
	I1217 01:33:11.072757 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.072771 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:11.072781 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:11.072793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:11.092411 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:11.092531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:11.173959 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:11.173986 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:11.174000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:11.204098 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:11.204130 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:11.265126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:11.265169 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:11.329309 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:11.329350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:11.366487 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:11.366516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:11.449439 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:11.449474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:11.493614 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:11.493648 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.530111 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:11.530142 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:11.573692 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:11.573724 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.175120 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:14.187102 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:14.187212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:14.217900 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.217923 1225677 cri.go:89] found id: ""
	I1217 01:33:14.217933 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:14.217993 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.228556 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:14.228632 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:14.256615 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.256694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.256731 1225677 cri.go:89] found id: ""
	I1217 01:33:14.256747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:14.256855 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.260873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.264886 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:14.264982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:14.293944 1225677 cri.go:89] found id: ""
	I1217 01:33:14.294012 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.294036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:14.294057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:14.294149 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:14.322566 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.322586 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.322591 1225677 cri.go:89] found id: ""
	I1217 01:33:14.322599 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:14.322693 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.326575 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.330162 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:14.330237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:14.356466 1225677 cri.go:89] found id: ""
	I1217 01:33:14.356491 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.356500 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:14.356506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:14.356566 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:14.386031 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:14.386055 1225677 cri.go:89] found id: ""
	I1217 01:33:14.386064 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:14.386142 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.390030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:14.390110 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:14.416257 1225677 cri.go:89] found id: ""
	I1217 01:33:14.416284 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.416293 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:14.416303 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:14.416317 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.511192 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:14.511232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:14.604109 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:14.604132 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:14.604148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.656861 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:14.656895 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.685614 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:14.685642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:14.764169 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:14.764208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:14.812699 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:14.812730 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:14.831513 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:14.831547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.858309 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:14.858339 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.909041 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:14.909072 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.975681 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:14.975723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.515279 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:17.540730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:17.540806 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:17.570081 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:17.570102 1225677 cri.go:89] found id: ""
	I1217 01:33:17.570110 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:17.570178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.574399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:17.574471 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:17.599589 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:17.599610 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:17.599614 1225677 cri.go:89] found id: ""
	I1217 01:33:17.599622 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:17.599689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.604570 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.608574 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:17.608645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:17.635229 1225677 cri.go:89] found id: ""
	I1217 01:33:17.635306 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.635329 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:17.635350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:17.635422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:17.668964 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:17.669003 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.669009 1225677 cri.go:89] found id: ""
	I1217 01:33:17.669017 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:17.669103 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.673057 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.677753 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:17.677826 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:17.707206 1225677 cri.go:89] found id: ""
	I1217 01:33:17.707245 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.707255 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:17.707261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:17.707325 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:17.740289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.740313 1225677 cri.go:89] found id: ""
	I1217 01:33:17.740322 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:17.740385 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.744409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:17.744515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:17.771770 1225677 cri.go:89] found id: ""
	I1217 01:33:17.771797 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.771806 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:17.771815 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:17.771828 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.800155 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:17.800190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:17.882443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:17.882481 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:17.935750 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:17.935781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:17.954392 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:17.954425 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:18.031535 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:18.031568 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:18.031585 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:18.079987 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:18.080029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:18.108390 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:18.108454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:18.206148 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:18.206190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:18.238865 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:18.238894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:18.280200 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:18.280236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.844541 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:20.855183 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:20.855255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:20.883645 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:20.883666 1225677 cri.go:89] found id: ""
	I1217 01:33:20.883673 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:20.883731 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.888021 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:20.888094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:20.917299 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:20.917325 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:20.917330 1225677 cri.go:89] found id: ""
	I1217 01:33:20.917338 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:20.917397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.921256 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.925997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:20.926069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:20.952872 1225677 cri.go:89] found id: ""
	I1217 01:33:20.952898 1225677 logs.go:282] 0 containers: []
	W1217 01:33:20.952907 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:20.952913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:20.952970 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:20.979961 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.979983 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:20.979989 1225677 cri.go:89] found id: ""
	I1217 01:33:20.979998 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:20.980064 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.984302 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.989098 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:20.989171 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:21.023299 1225677 cri.go:89] found id: ""
	I1217 01:33:21.023365 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.023382 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:21.023390 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:21.023454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:21.052742 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.052763 1225677 cri.go:89] found id: ""
	I1217 01:33:21.052773 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:21.052830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:21.056774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:21.056847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:21.086360 1225677 cri.go:89] found id: ""
	I1217 01:33:21.086382 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.086391 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:21.086399 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:21.086411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:21.114471 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:21.114500 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:21.213416 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:21.213451 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:21.294188 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:21.294212 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:21.294253 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:21.321989 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:21.322022 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:21.361898 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:21.361940 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:21.415113 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:21.415151 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.443169 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:21.443202 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:21.538356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:21.538403 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:21.584226 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:21.584255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:21.602588 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:21.602625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.196991 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:24.207442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:24.207518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:24.243683 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.243708 1225677 cri.go:89] found id: ""
	I1217 01:33:24.243717 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:24.243772 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.247370 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:24.247444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:24.274124 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.274153 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.274159 1225677 cri.go:89] found id: ""
	I1217 01:33:24.274167 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:24.274224 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.277936 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.281546 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:24.281628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:24.310864 1225677 cri.go:89] found id: ""
	I1217 01:33:24.310893 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.310903 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:24.310910 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:24.310968 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:24.342620 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.342643 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.342648 1225677 cri.go:89] found id: ""
	I1217 01:33:24.342656 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:24.342714 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.346873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.350690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:24.350776 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:24.378447 1225677 cri.go:89] found id: ""
	I1217 01:33:24.378476 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.378486 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:24.378510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:24.378592 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:24.410097 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.410122 1225677 cri.go:89] found id: ""
	I1217 01:33:24.410132 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:24.410193 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.414020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:24.414094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:24.440741 1225677 cri.go:89] found id: ""
	I1217 01:33:24.440825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.440851 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:24.440879 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:24.440912 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:24.460132 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:24.460163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.493812 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:24.493842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.536741 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:24.536777 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.597219 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:24.597260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.663765 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:24.663805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.703808 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:24.703840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:24.784250 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:24.784288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:24.883741 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:24.883779 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:24.962818 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:24.962842 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:24.962856 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.994828 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:24.994858 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:27.546732 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:27.564740 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:27.564805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:27.608525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:27.608549 1225677 cri.go:89] found id: ""
	I1217 01:33:27.608558 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:27.608611 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.613062 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:27.613135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:27.659805 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:27.659827 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:27.659831 1225677 cri.go:89] found id: ""
	I1217 01:33:27.659838 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:27.659896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.664210 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.668351 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:27.668446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:27.704696 1225677 cri.go:89] found id: ""
	I1217 01:33:27.704771 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.704794 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:27.704815 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:27.704898 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:27.738798 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:27.738821 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:27.738827 1225677 cri.go:89] found id: ""
	I1217 01:33:27.738834 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:27.738896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.743026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.746985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:27.747059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:27.785087 1225677 cri.go:89] found id: ""
	I1217 01:33:27.785111 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.785119 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:27.785126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:27.785192 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:27.818270 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:27.818289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:27.818293 1225677 cri.go:89] found id: ""
	I1217 01:33:27.818300 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:27.818356 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.822652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.826638 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:27.826695 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:27.865573 1225677 cri.go:89] found id: ""
	I1217 01:33:27.865604 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.865613 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:27.865623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:27.865634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:27.972193 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:27.972232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:28.056562 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:28.056589 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:28.056605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:28.085398 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:28.085429 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:28.132214 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:28.132252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:28.174271 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:28.174303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:28.273045 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:28.273082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:28.321799 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:28.321880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:28.342146 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:28.342292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:28.406933 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:28.407120 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:28.498600 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:28.498680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:28.534124 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:28.534150 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.091052 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:31.103205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:31.103279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:31.140533 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.140556 1225677 cri.go:89] found id: ""
	I1217 01:33:31.140564 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:31.140627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.145121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:31.145202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:31.175735 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.175761 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.175768 1225677 cri.go:89] found id: ""
	I1217 01:33:31.175775 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:31.175832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.180026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.184555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:31.184628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:31.213074 1225677 cri.go:89] found id: ""
	I1217 01:33:31.213100 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.213110 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:31.213117 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:31.213174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:31.251260 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.251286 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.251291 1225677 cri.go:89] found id: ""
	I1217 01:33:31.251299 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:31.251354 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.255625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.259649 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:31.259726 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:31.287030 1225677 cri.go:89] found id: ""
	I1217 01:33:31.287056 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.287065 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:31.287072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:31.287128 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:31.314782 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.314851 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.314876 1225677 cri.go:89] found id: ""
	I1217 01:33:31.314902 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:31.314984 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.320071 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.324354 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:31.324534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:31.357412 1225677 cri.go:89] found id: ""
	I1217 01:33:31.357439 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.357449 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:31.357464 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:31.357480 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:31.462967 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:31.463006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:31.482965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:31.482995 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:31.552928 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:31.552952 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:31.552966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.579435 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:31.579470 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.619907 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:31.619945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.687595 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:31.687636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.720143 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:31.720175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.746106 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:31.746135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.812096 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:31.812131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.841610 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:31.841646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:31.920159 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:31.920197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:34.457713 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:34.469492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:34.469574 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:34.497755 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:34.497777 1225677 cri.go:89] found id: ""
	I1217 01:33:34.497786 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:34.497850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.501620 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:34.501703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:34.532206 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:34.532227 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:34.532231 1225677 cri.go:89] found id: ""
	I1217 01:33:34.532238 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:34.532299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.537376 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.541069 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:34.541142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:34.577690 1225677 cri.go:89] found id: ""
	I1217 01:33:34.577730 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.577740 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:34.577763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:34.577844 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:34.606156 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.606176 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:34.606180 1225677 cri.go:89] found id: ""
	I1217 01:33:34.606188 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:34.606243 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.610716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.614894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:34.614990 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:34.644563 1225677 cri.go:89] found id: ""
	I1217 01:33:34.644590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.644599 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:34.644605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:34.644685 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:34.673641 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:34.673666 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:34.673671 1225677 cri.go:89] found id: ""
	I1217 01:33:34.673679 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:34.673737 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.677531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.681295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:34.681370 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:34.708990 1225677 cri.go:89] found id: ""
	I1217 01:33:34.709071 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.709088 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:34.709099 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:34.709111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:34.809701 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:34.809785 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:34.828178 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:34.828210 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:34.903131 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:34.903155 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:34.903168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.971266 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:34.971304 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:35.004179 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:35.004215 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:35.041784 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:35.041815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:35.067541 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:35.067571 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:35.126841 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:35.126874 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:35.172191 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:35.172226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:35.200255 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:35.200295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:35.239991 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:35.240030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:37.824762 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:37.835623 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:37.835693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:37.865989 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:37.866008 1225677 cri.go:89] found id: ""
	I1217 01:33:37.866018 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:37.866073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.869857 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:37.869946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:37.898865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:37.898940 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:37.898960 1225677 cri.go:89] found id: ""
	I1217 01:33:37.898986 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:37.899093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.903232 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.907211 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:37.907281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:37.939280 1225677 cri.go:89] found id: ""
	I1217 01:33:37.939302 1225677 logs.go:282] 0 containers: []
	W1217 01:33:37.939311 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:37.939318 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:37.939379 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:37.967924 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:37.967945 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:37.967949 1225677 cri.go:89] found id: ""
	I1217 01:33:37.967957 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:37.968032 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.971797 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.975432 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:37.975510 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:38.007766 1225677 cri.go:89] found id: ""
	I1217 01:33:38.007790 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.007798 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:38.007805 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:38.007864 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:38.037473 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.037495 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.037503 1225677 cri.go:89] found id: ""
	I1217 01:33:38.037511 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:38.037566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.041569 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.045417 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:38.045524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:38.073829 1225677 cri.go:89] found id: ""
	I1217 01:33:38.073851 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.073860 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:38.073870 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:38.073882 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:38.093728 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:38.093764 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:38.176670 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:38.176690 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:38.176703 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:38.211414 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:38.211443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:38.263725 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:38.263761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:38.309151 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:38.309186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:38.338107 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:38.338143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.369538 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:38.369566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:38.449918 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:38.449954 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:38.542249 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:38.542288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:38.612539 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:38.612617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.642932 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:38.643015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:41.175028 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:41.186849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:41.186921 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:41.230880 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.230955 1225677 cri.go:89] found id: ""
	I1217 01:33:41.230992 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:41.231084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.235480 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:41.235641 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:41.266906 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.266980 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.267014 1225677 cri.go:89] found id: ""
	I1217 01:33:41.267040 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:41.267127 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.271136 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.275105 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:41.275225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:41.306499 1225677 cri.go:89] found id: ""
	I1217 01:33:41.306580 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.306603 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:41.306624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:41.306737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:41.333549 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.333575 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.333580 1225677 cri.go:89] found id: ""
	I1217 01:33:41.333589 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:41.333643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.337497 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.341450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:41.341531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:41.368976 1225677 cri.go:89] found id: ""
	I1217 01:33:41.369004 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.369014 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:41.369020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:41.369082 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:41.397520 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.397583 1225677 cri.go:89] found id: ""
	I1217 01:33:41.397607 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:41.397684 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.401528 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:41.401607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:41.427395 1225677 cri.go:89] found id: ""
	I1217 01:33:41.427423 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.427434 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:41.427444 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:41.427463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:41.525514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:41.525559 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:41.551264 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:41.551299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:41.625083 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:41.625123 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:41.625147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.702454 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:41.702490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.735107 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:41.735134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.769228 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:41.769269 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.799696 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:41.799725 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.848171 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:41.848207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.933395 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:41.933446 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:42.025408 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:42.025452 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:44.562646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:44.573393 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:44.573486 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:44.600868 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:44.600895 1225677 cri.go:89] found id: ""
	I1217 01:33:44.600906 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:44.600983 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.604710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:44.604780 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:44.632082 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:44.632158 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:44.632187 1225677 cri.go:89] found id: ""
	I1217 01:33:44.632208 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:44.632294 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.636315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.640212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:44.640285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:44.669382 1225677 cri.go:89] found id: ""
	I1217 01:33:44.669404 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.669413 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:44.669419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:44.669480 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:44.699713 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:44.699732 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.699737 1225677 cri.go:89] found id: ""
	I1217 01:33:44.699747 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:44.699801 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.703608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.707118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:44.707191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:44.733881 1225677 cri.go:89] found id: ""
	I1217 01:33:44.733905 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.733914 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:44.733921 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:44.733983 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:44.761418 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:44.761440 1225677 cri.go:89] found id: ""
	I1217 01:33:44.761449 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:44.761507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.765368 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:44.765451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:44.797562 1225677 cri.go:89] found id: ""
	I1217 01:33:44.797587 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.797595 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:44.797605 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:44.797617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.824683 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:44.824716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:44.935133 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:44.935177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:44.954652 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:44.954684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:45.015678 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:45.015775 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:45.189553 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:45.191524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:45.273264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:45.273306 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:45.371974 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:45.372013 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:45.409119 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:45.409149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:45.483606 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:45.483631 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:45.483645 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:45.511796 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:45.511826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
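	The cycle above repeats every few seconds: minikube probes for a running kube-apiserver with pgrep, then enumerates the control-plane containers one component at a time via crictl before gathering their logs. Below is a minimal, illustrative Go sketch of that enumeration step (not minikube's actual code), assuming crictl is on the node's PATH; the component names and flags are taken from the "listing CRI containers" / Run: lines above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names taken from the "listing CRI containers" lines above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("%-24s no containers found\n", name)
				continue
			}
			fmt.Printf("%-24s %d container(s): %v\n", name, len(ids), ids)
		}
	}

	Run against the state logged here, this would report IDs for kube-apiserver, etcd, kube-scheduler and kube-controller-manager, and "no containers found" for coredns, kube-proxy and kindnet, matching the "found id" / "No container was found matching" lines.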
	I1217 01:33:48.069605 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:48.081402 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:48.081501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:48.113467 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.113487 1225677 cri.go:89] found id: ""
	I1217 01:33:48.113496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:48.113554 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.123702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:48.123830 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:48.152225 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.152299 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.152320 1225677 cri.go:89] found id: ""
	I1217 01:33:48.152346 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:48.152452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.156596 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.160848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:48.160930 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:48.192903 1225677 cri.go:89] found id: ""
	I1217 01:33:48.192934 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.192944 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:48.192951 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:48.193016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:48.223459 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.223483 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.223489 1225677 cri.go:89] found id: ""
	I1217 01:33:48.223496 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:48.223577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.228708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.233033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:48.233131 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:48.264313 1225677 cri.go:89] found id: ""
	I1217 01:33:48.264339 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.264348 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:48.264355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:48.264430 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:48.292891 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.292963 1225677 cri.go:89] found id: ""
	I1217 01:33:48.292986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:48.293068 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.297013 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:48.297089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:48.324697 1225677 cri.go:89] found id: ""
	I1217 01:33:48.324724 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.324734 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:48.324743 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:48.324755 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:48.343285 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:48.343318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.401079 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:48.401121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.445651 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:48.445685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.487906 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:48.487936 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.520261 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:48.520288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:48.612095 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:48.612132 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:48.686505 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:48.686528 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:48.686545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.715518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:48.715549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.780723 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:48.780758 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:48.813883 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:48.813910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.424534 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:51.435019 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:51.435089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:51.461515 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.461539 1225677 cri.go:89] found id: ""
	I1217 01:33:51.461549 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:51.461610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.465697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:51.465778 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:51.494232 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.494254 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:51.494260 1225677 cri.go:89] found id: ""
	I1217 01:33:51.494267 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:51.494342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.498178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.501847 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:51.501920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:51.533242 1225677 cri.go:89] found id: ""
	I1217 01:33:51.533267 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.533277 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:51.533283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:51.533356 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:51.559915 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.559937 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:51.559942 1225677 cri.go:89] found id: ""
	I1217 01:33:51.559950 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:51.560017 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.563739 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.567426 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:51.567506 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:51.598933 1225677 cri.go:89] found id: ""
	I1217 01:33:51.598958 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.598978 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:51.598985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:51.599043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:51.628013 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:51.628085 1225677 cri.go:89] found id: ""
	I1217 01:33:51.628107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:51.628195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.632081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:51.632153 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:51.664059 1225677 cri.go:89] found id: ""
	I1217 01:33:51.664095 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.664104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:51.664114 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:51.664127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.703117 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:51.703141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.746864 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:51.746901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.813259 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:51.813294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:51.890408 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:51.890448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.996243 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:51.996281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:52.078355 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:52.078385 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:52.078399 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:52.124157 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:52.124201 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:52.158325 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:52.158406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:52.194882 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:52.194917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:52.236180 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:52.236223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:54.755766 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:54.766584 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:54.766659 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:54.794813 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:54.794834 1225677 cri.go:89] found id: ""
	I1217 01:33:54.794844 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:54.794900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.798697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:54.798816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:54.830345 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:54.830368 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:54.830374 1225677 cri.go:89] found id: ""
	I1217 01:33:54.830381 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:54.830437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.834212 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.837869 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:54.837958 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:54.865687 1225677 cri.go:89] found id: ""
	I1217 01:33:54.865710 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.865720 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:54.865726 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:54.865784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:54.893199 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:54.893222 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:54.893228 1225677 cri.go:89] found id: ""
	I1217 01:33:54.893236 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:54.893300 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.897296 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.901035 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:54.901109 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:54.935123 1225677 cri.go:89] found id: ""
	I1217 01:33:54.935150 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.935160 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:54.935165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:54.935227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:54.960828 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:54.960908 1225677 cri.go:89] found id: ""
	I1217 01:33:54.960925 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:54.960994 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.965788 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:54.965858 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:54.996816 1225677 cri.go:89] found id: ""
	I1217 01:33:54.996844 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.996854 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:54.996864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:54.996877 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:55.049187 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:55.049226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:55.122184 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:55.122224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:55.149525 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:55.149555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:55.259828 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:55.259866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:55.286876 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:55.286905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:55.332115 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:55.332149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:55.359308 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:55.359340 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:55.444861 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:55.444901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:55.492994 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:55.493026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:55.512281 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:55.512312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:55.587576 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
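	Every "describe nodes" attempt in this section fails the same way: a kube-apiserver container ID (7f7316bf…) is found, but nothing accepts connections on localhost:8443, so kubectl's API discovery calls are refused. A minimal sketch of that probe, assuming it is run on the minikube node itself; the endpoint, port and container name come from the log, nothing else is implied.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// 1. Is a kube-apiserver container present at all? (same crictl call as in the log)
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		fmt.Println("kube-apiserver container IDs:", strings.Fields(string(out)))

		// 2. Is anything actually serving on localhost:8443? This is the call that
		//    comes back "connection refused" in every describe-nodes attempt above.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}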
	I1217 01:33:58.089262 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:58.101573 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:58.101658 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:58.137991 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:58.138015 1225677 cri.go:89] found id: ""
	I1217 01:33:58.138024 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:58.138084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.142504 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:58.142579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:58.172313 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.172337 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.172343 1225677 cri.go:89] found id: ""
	I1217 01:33:58.172350 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:58.172446 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.176396 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.180282 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:58.180366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:58.211138 1225677 cri.go:89] found id: ""
	I1217 01:33:58.211171 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.211181 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:58.211193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:58.211257 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:58.243736 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.243759 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.243764 1225677 cri.go:89] found id: ""
	I1217 01:33:58.243773 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:58.243830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.247791 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.251576 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:58.251655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:58.288139 1225677 cri.go:89] found id: ""
	I1217 01:33:58.288173 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.288184 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:58.288193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:58.288255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:58.317667 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.317690 1225677 cri.go:89] found id: ""
	I1217 01:33:58.317700 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:58.317763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.321820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:58.321906 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:58.350850 1225677 cri.go:89] found id: ""
	I1217 01:33:58.350878 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.350888 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:58.350897 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:58.350910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.416830 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:58.416867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.444837 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:58.444868 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:58.528215 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:58.528263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:58.575846 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:58.575880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:58.595772 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:58.595807 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.650340 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:58.650375 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.701278 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:58.701316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.732779 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:58.732810 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:58.835274 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:58.835310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:58.910122 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.910207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:58.910236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.438103 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:01.448838 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:01.448920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:01.479627 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.479651 1225677 cri.go:89] found id: ""
	I1217 01:34:01.479678 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:01.479736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.483564 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:01.483634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:01.510339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.510364 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.510370 1225677 cri.go:89] found id: ""
	I1217 01:34:01.510378 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:01.510435 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.514437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.519025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:01.519139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:01.547434 1225677 cri.go:89] found id: ""
	I1217 01:34:01.547457 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.547466 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:01.547473 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:01.547530 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:01.574487 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.574508 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.574513 1225677 cri.go:89] found id: ""
	I1217 01:34:01.574520 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:01.574577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.578139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.581545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:01.581626 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:01.609342 1225677 cri.go:89] found id: ""
	I1217 01:34:01.609365 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.609374 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:01.609381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:01.609439 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:01.636506 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:01.636530 1225677 cri.go:89] found id: ""
	I1217 01:34:01.636540 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:01.636602 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.640274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:01.640388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:01.669875 1225677 cri.go:89] found id: ""
	I1217 01:34:01.669944 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.669969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:01.669993 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:01.670033 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.710653 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:01.710691 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.763990 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:01.764028 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.833068 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:01.833107 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.863940 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:01.864023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:01.967213 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:01.967254 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:01.992938 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:01.992972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:02.024381 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:02.024443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:02.106857 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:02.106896 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:02.143612 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:02.143646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:02.213706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:02.213729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:02.213742 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
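The pattern above repeats throughout this section: the kube-apiserver container (7f7316bf25b3...) is present, but every `kubectl describe nodes` attempt against localhost:8443 fails with "connection refused", so nothing is actually serving on the apiserver port. As a minimal illustration only (not minikube's code; the address and timeout are assumptions), a TCP readiness probe for that port could look like this in Go:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer reports whether anything is accepting TCP connections on the
// given apiserver address. It only checks the socket, not TLS or /healthz.
func probeAPIServer(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	return conn.Close()
}

func main() {
	// localhost:8443 is the address the failing kubectl calls in this log use.
	if err := probeAPIServer("localhost:8443", 2*time.Second); err != nil {
		fmt.Println(err) // would print "connect: connection refused" while the apiserver is down
		return
	}
	fmt.Println("apiserver port is accepting connections")
}
```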
	I1217 01:34:04.741826 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:04.752958 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:04.753026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:04.783743 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.783762 1225677 cri.go:89] found id: ""
	I1217 01:34:04.783770 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:04.784150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.788287 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:04.788359 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:04.817040 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:04.817073 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:04.817079 1225677 cri.go:89] found id: ""
	I1217 01:34:04.817086 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:04.817147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.821094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.825495 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:04.825571 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:04.853100 1225677 cri.go:89] found id: ""
	I1217 01:34:04.853124 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.853133 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:04.853140 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:04.853202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:04.881403 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:04.881425 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:04.881430 1225677 cri.go:89] found id: ""
	I1217 01:34:04.881438 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:04.881502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.885516 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.889230 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:04.889353 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:04.915187 1225677 cri.go:89] found id: ""
	I1217 01:34:04.915219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.915229 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:04.915235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:04.915296 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:04.946769 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:04.946802 1225677 cri.go:89] found id: ""
	I1217 01:34:04.946811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:04.946884 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.951231 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:04.951339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:04.978082 1225677 cri.go:89] found id: ""
	I1217 01:34:04.978110 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.978120 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:04.978128 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:04.978166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:05.019076 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:05.019109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:05.101083 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:05.101161 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:05.177848 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:05.177870 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:05.177884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:05.204143 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:05.204172 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:05.268231 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:05.268268 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:05.297025 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:05.297054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:05.327881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:05.327911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:05.437319 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:05.437360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:05.456847 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:05.456883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:05.498209 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:05.498242 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.077748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:08.088818 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:08.088890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:08.126181 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.126213 1225677 cri.go:89] found id: ""
	I1217 01:34:08.126227 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:08.126292 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.131226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:08.131346 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:08.160808 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.160832 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.160837 1225677 cri.go:89] found id: ""
	I1217 01:34:08.160846 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:08.160923 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.166045 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.170405 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:08.170497 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:08.200928 1225677 cri.go:89] found id: ""
	I1217 01:34:08.200954 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.200964 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:08.200970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:08.201068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:08.237681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.237706 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.237711 1225677 cri.go:89] found id: ""
	I1217 01:34:08.237719 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:08.237794 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.241696 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.245486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:08.245561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:08.272543 1225677 cri.go:89] found id: ""
	I1217 01:34:08.272572 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.272582 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:08.272594 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:08.272676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:08.304603 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.304627 1225677 cri.go:89] found id: ""
	I1217 01:34:08.304635 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:08.304690 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.308617 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:08.308691 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:08.338781 1225677 cri.go:89] found id: ""
	I1217 01:34:08.338809 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.338818 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:08.338827 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:08.338839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:08.374627 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:08.374660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:08.472485 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:08.472523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:08.490991 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:08.491026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.574253 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:08.574292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.602049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:08.602118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:08.681328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:08.681348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:08.681361 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.708974 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:08.709000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.761284 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:08.761320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.819965 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:08.820006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.850377 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:08.850405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
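Each gathering pass in this log lists containers per component with `sudo crictl ps -a --quiet --name=<component>` and then tails each returned ID with `crictl logs --tail 400`. A rough Go sketch of that shape, using os/exec locally instead of minikube's ssh_runner (assumes crictl is on PATH and sudo is available; illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the component, mirroring the "crictl ps -a --quiet --name=..."
// calls seen in this log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last n lines of a container's log via crictl.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no %s containers found\n", component)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}
```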
	I1217 01:34:11.432699 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:11.444142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:11.444218 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:11.477380 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.477404 1225677 cri.go:89] found id: ""
	I1217 01:34:11.477414 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:11.477475 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.481941 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:11.482014 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:11.510503 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.510529 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.510546 1225677 cri.go:89] found id: ""
	I1217 01:34:11.510554 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:11.510650 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.514842 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.518923 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:11.519013 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:11.546962 1225677 cri.go:89] found id: ""
	I1217 01:34:11.546990 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.547000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:11.547006 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:11.547080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:11.574757 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:11.574782 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:11.574787 1225677 cri.go:89] found id: ""
	I1217 01:34:11.574796 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:11.574877 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.579088 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.583273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:11.583402 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:11.613215 1225677 cri.go:89] found id: ""
	I1217 01:34:11.613244 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.613254 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:11.613261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:11.613326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:11.642127 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:11.642166 1225677 cri.go:89] found id: ""
	I1217 01:34:11.642175 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:11.642249 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.646180 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:11.646281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:11.676821 1225677 cri.go:89] found id: ""
	I1217 01:34:11.676848 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.676858 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:11.676868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:11.676880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:11.776881 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:11.776922 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:11.797665 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:11.797700 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:11.873871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:11.873895 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:11.873909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.901431 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:11.901461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.946983 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:11.947021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.993263 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:11.993299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:12.069104 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:12.069143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:12.101484 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:12.101511 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:12.137373 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:12.137404 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:12.219779 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:12.219833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:14.749747 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:14.760900 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:14.760971 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:14.789422 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:14.789504 1225677 cri.go:89] found id: ""
	I1217 01:34:14.789520 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:14.789579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.794016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:14.794094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:14.820779 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:14.820802 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:14.820808 1225677 cri.go:89] found id: ""
	I1217 01:34:14.820815 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:14.820892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.824759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.828502 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:14.828620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:14.855015 1225677 cri.go:89] found id: ""
	I1217 01:34:14.855042 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.855051 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:14.855058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:14.855118 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:14.882554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:14.882580 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:14.882586 1225677 cri.go:89] found id: ""
	I1217 01:34:14.882594 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:14.882649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.886723 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.890383 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:14.890487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:14.921014 1225677 cri.go:89] found id: ""
	I1217 01:34:14.921051 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.921077 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:14.921096 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:14.921186 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:14.950121 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:14.950151 1225677 cri.go:89] found id: ""
	I1217 01:34:14.950160 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:14.950235 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.954391 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:14.954491 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:14.981305 1225677 cri.go:89] found id: ""
	I1217 01:34:14.981381 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.981396 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:14.981406 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:14.981424 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:15.082515 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:15.082601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:15.115676 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:15.115766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:15.207150 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:15.207196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:15.253067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:15.253103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:15.282406 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:15.282434 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:15.332186 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:15.332232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:15.383617 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:15.383653 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:15.413724 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:15.413761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:15.512500 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:15.512539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:15.531712 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:15.531744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:15.607024 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
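Taken together, the passes at 01:34:02, 01:34:05, 01:34:08, 01:34:11 and 01:34:15 show a retry loop: roughly every three seconds the runner re-checks for a live kube-apiserver process and re-dumps diagnostics when the connection is still refused. A generic poll-until-deadline loop in that spirit might look like the sketch below (the interval, deadline and check are assumptions, not minikube's implementation):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the apiserver address until it accepts a TCP
// connection or the context deadline expires, similar in spirit to the
// ~3-second retry cadence visible in this log.
func waitForAPIServer(ctx context.Context, addr string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if conn, err := net.DialTimeout("tcp", addr, interval); err == nil {
			conn.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for apiserver at " + addr)
		case <-ticker.C:
			// retry; a real implementation would also re-collect diagnostics here
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx, "localhost:8443", 3*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is up")
}
```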
	I1217 01:34:18.107382 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:18.125209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:18.125300 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:18.154715 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.154743 1225677 cri.go:89] found id: ""
	I1217 01:34:18.154759 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:18.154827 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.158989 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:18.159058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:18.186887 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.186906 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.186910 1225677 cri.go:89] found id: ""
	I1217 01:34:18.186918 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:18.186974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.191114 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.195016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:18.195088 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:18.230496 1225677 cri.go:89] found id: ""
	I1217 01:34:18.230522 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.230532 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:18.230541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:18.230603 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:18.257433 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.257453 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.257458 1225677 cri.go:89] found id: ""
	I1217 01:34:18.257466 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:18.257522 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.261223 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.264998 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:18.265077 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:18.298281 1225677 cri.go:89] found id: ""
	I1217 01:34:18.298359 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.298373 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:18.298381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:18.298438 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:18.326008 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:18.326029 1225677 cri.go:89] found id: ""
	I1217 01:34:18.326038 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:18.326094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.329952 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:18.330026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:18.355880 1225677 cri.go:89] found id: ""
	I1217 01:34:18.355914 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.355924 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:18.355956 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:18.355971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:18.430677 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:18.430716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:18.461146 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:18.461178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:18.483944 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:18.483976 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:18.558884 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.558914 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:18.558930 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.631593 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:18.631631 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.661399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:18.661431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:18.765933 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:18.765971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.798005 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:18.798035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.838207 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:18.838245 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.879939 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:18.879973 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.409362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:21.420285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:21.420355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:21.450399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:21.450424 1225677 cri.go:89] found id: ""
	I1217 01:34:21.450433 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:21.450488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.454541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:21.454613 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:21.484061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.484086 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:21.484091 1225677 cri.go:89] found id: ""
	I1217 01:34:21.484099 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:21.484156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.488024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.491648 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:21.491718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:21.522026 1225677 cri.go:89] found id: ""
	I1217 01:34:21.522052 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.522062 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:21.522071 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:21.522139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:21.554855 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.554887 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:21.554894 1225677 cri.go:89] found id: ""
	I1217 01:34:21.554902 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:21.554955 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.558520 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.562302 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:21.562407 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:21.590541 1225677 cri.go:89] found id: ""
	I1217 01:34:21.590564 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.590574 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:21.590580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:21.590636 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:21.626269 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.626340 1225677 cri.go:89] found id: ""
	I1217 01:34:21.626366 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:21.626428 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.630350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:21.630464 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:21.666471 1225677 cri.go:89] found id: ""
	I1217 01:34:21.666498 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.666507 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:21.666516 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:21.666533 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.706780 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:21.706815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.774693 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:21.774729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:21.861669 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:21.861713 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:21.977061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:21.977096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:22.003122 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:22.003171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:22.051916 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:22.051957 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:22.082713 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:22.082746 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:22.116010 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:22.116037 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:22.146809 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:22.146848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:22.228639 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:22.228703 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:22.228732 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.754744 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:24.765436 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:24.765518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:24.794628 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.794658 1225677 cri.go:89] found id: ""
	I1217 01:34:24.794667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:24.794732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.798378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:24.798454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:24.832756 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:24.832781 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:24.832787 1225677 cri.go:89] found id: ""
	I1217 01:34:24.832794 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:24.832850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.836854 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.840412 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:24.840572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:24.868168 1225677 cri.go:89] found id: ""
	I1217 01:34:24.868247 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.868270 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:24.868290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:24.868381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:24.899805 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:24.899825 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:24.899830 1225677 cri.go:89] found id: ""
	I1217 01:34:24.899838 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:24.899893 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.903464 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.906950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:24.907067 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:24.935718 1225677 cri.go:89] found id: ""
	I1217 01:34:24.935744 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.935753 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:24.935760 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:24.935818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:24.967779 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:24.967802 1225677 cri.go:89] found id: ""
	I1217 01:34:24.967811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:24.967863 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.971468 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:24.971534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:25.001724 1225677 cri.go:89] found id: ""
	I1217 01:34:25.001815 1225677 logs.go:282] 0 containers: []
	W1217 01:34:25.001842 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:25.001890 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:25.001925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:25.023512 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:25.023709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:25.051815 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:25.051848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:25.099451 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:25.099487 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:25.141801 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:25.141832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:25.178412 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:25.178444 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:25.285631 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:25.285667 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:25.362578 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:25.362602 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:25.362617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:25.403014 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:25.403050 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:25.510336 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:25.510395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:25.543551 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:25.543582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.129531 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:28.140763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:28.140832 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:28.184591 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.184616 1225677 cri.go:89] found id: ""
	I1217 01:34:28.184624 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:28.184707 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.188557 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:28.188634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:28.222629 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.222651 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.222656 1225677 cri.go:89] found id: ""
	I1217 01:34:28.222664 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:28.222724 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.226610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.230481 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:28.230575 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:28.257099 1225677 cri.go:89] found id: ""
	I1217 01:34:28.257126 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.257135 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:28.257142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:28.257220 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:28.291310 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:28.291347 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.291354 1225677 cri.go:89] found id: ""
	I1217 01:34:28.291388 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:28.291469 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.295342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.298970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:28.299075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:28.329122 1225677 cri.go:89] found id: ""
	I1217 01:34:28.329146 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.329155 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:28.329182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:28.329254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:28.359713 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.359736 1225677 cri.go:89] found id: ""
	I1217 01:34:28.359745 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:28.359803 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.363561 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:28.363633 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:28.397883 1225677 cri.go:89] found id: ""
	I1217 01:34:28.397910 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.397920 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:28.397929 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:28.397941 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.431945 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:28.431974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:28.482268 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:28.482300 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.509035 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:28.509067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.557586 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:28.557623 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.616155 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:28.616203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.647557 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:28.647590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.723102 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:28.723139 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:28.830255 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:28.830293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:28.849322 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:28.849355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:28.919883 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:28.919905 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:28.919926 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.492801 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:31.504000 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:31.504075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:31.539143 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.539163 1225677 cri.go:89] found id: ""
	I1217 01:34:31.539173 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:31.539228 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.543277 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:31.543355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:31.573251 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:31.573271 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:31.573275 1225677 cri.go:89] found id: ""
	I1217 01:34:31.573284 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:31.573337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.577458 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.581377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:31.581451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:31.612241 1225677 cri.go:89] found id: ""
	I1217 01:34:31.612270 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.612280 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:31.612286 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:31.612345 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:31.643539 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.643563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.643569 1225677 cri.go:89] found id: ""
	I1217 01:34:31.643578 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:31.643638 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.647841 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.651771 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:31.651855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:31.685384 1225677 cri.go:89] found id: ""
	I1217 01:34:31.685409 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.685418 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:31.685425 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:31.685487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:31.713458 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.713491 1225677 cri.go:89] found id: ""
	I1217 01:34:31.713501 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:31.713571 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.717510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:31.717598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:31.742954 1225677 cri.go:89] found id: ""
	I1217 01:34:31.742979 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.742989 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:31.742998 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:31.743030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:31.826689 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:31.826712 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:31.826726 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.858359 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:31.858389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.890466 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:31.890494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.920394 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:31.920516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:31.954114 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:31.954143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:32.048397 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:32.048463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:32.068978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:32.069014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:32.126891 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:32.126931 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:32.194493 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:32.194531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:32.278811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:32.278854 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:34.866004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:34.876932 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:34.877040 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:34.904525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:34.904548 1225677 cri.go:89] found id: ""
	I1217 01:34:34.904556 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:34.904634 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.908290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:34.908388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:34.937927 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:34.937962 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:34.937967 1225677 cri.go:89] found id: ""
	I1217 01:34:34.937975 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:34.938053 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.941844 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.945447 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:34.945529 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:34.974834 1225677 cri.go:89] found id: ""
	I1217 01:34:34.974860 1225677 logs.go:282] 0 containers: []
	W1217 01:34:34.974870 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:34.974876 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:34.974932 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:35.015100 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.015121 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.015126 1225677 cri.go:89] found id: ""
	I1217 01:34:35.015134 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:35.015196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.019378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.023124 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:35.023202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:35.055461 1225677 cri.go:89] found id: ""
	I1217 01:34:35.055488 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.055497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:35.055503 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:35.055561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:35.083009 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.083083 1225677 cri.go:89] found id: ""
	I1217 01:34:35.083107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:35.083195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.087719 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:35.087788 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:35.115588 1225677 cri.go:89] found id: ""
	I1217 01:34:35.115615 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.115625 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:35.115649 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:35.115664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:35.165942 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:35.165978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.194775 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:35.194803 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:35.291776 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:35.291811 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:35.338079 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:35.338110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:35.357793 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:35.357824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:35.428871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:35.428893 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:35.428905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.499513 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:35.499548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.540136 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:35.540211 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:35.636873 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:35.636913 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:35.665818 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:35.665889 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.220553 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:38.231749 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:38.231823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:38.259479 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.259500 1225677 cri.go:89] found id: ""
	I1217 01:34:38.259509 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:38.259568 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.263241 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:38.263385 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:38.295256 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.295292 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.295301 1225677 cri.go:89] found id: ""
	I1217 01:34:38.295310 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:38.295378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.300468 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.305174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:38.305294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:38.339161 1225677 cri.go:89] found id: ""
	I1217 01:34:38.339194 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.339204 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:38.339210 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:38.339275 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:38.367494 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.367518 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:38.367524 1225677 cri.go:89] found id: ""
	I1217 01:34:38.367531 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:38.367608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.371441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.375084 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:38.375191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:38.401755 1225677 cri.go:89] found id: ""
	I1217 01:34:38.401784 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.401795 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:38.401801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:38.401890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:38.429928 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.429962 1225677 cri.go:89] found id: ""
	I1217 01:34:38.429971 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:38.430044 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.433894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:38.433965 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:38.461088 1225677 cri.go:89] found id: ""
	I1217 01:34:38.461114 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.461124 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:38.461133 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:38.461144 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:38.544237 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:38.544274 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.574281 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:38.574312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.620093 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:38.620131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.674826 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:38.674902 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.752562 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:38.752603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.781494 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:38.781527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:38.833674 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:38.833706 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:38.933793 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:38.933832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:38.953733 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:38.953782 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:39.029298 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:39.029322 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:39.029336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.557003 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:41.568311 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:41.568412 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:41.601070 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:41.601089 1225677 cri.go:89] found id: ""
	I1217 01:34:41.601097 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:41.601156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.605150 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:41.605227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:41.633863 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:41.633887 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:41.633893 1225677 cri.go:89] found id: ""
	I1217 01:34:41.633901 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:41.633958 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.638555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.644087 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:41.644168 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:41.684237 1225677 cri.go:89] found id: ""
	I1217 01:34:41.684276 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.684287 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:41.684294 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:41.684371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:41.717925 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:41.717993 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.718016 1225677 cri.go:89] found id: ""
	I1217 01:34:41.718032 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:41.718109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.722478 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.726529 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:41.726607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:41.754525 1225677 cri.go:89] found id: ""
	I1217 01:34:41.754552 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.754562 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:41.754571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:41.754673 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:41.784794 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:41.784860 1225677 cri.go:89] found id: ""
	I1217 01:34:41.784883 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:41.784969 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.788882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:41.788980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:41.825117 1225677 cri.go:89] found id: ""
	I1217 01:34:41.825193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.825216 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:41.825233 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:41.825259 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:41.934154 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:41.934191 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:41.955231 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:41.955263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:42.023779 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:42.023819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:42.054183 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:42.054218 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:42.146898 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:42.147005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:42.249519 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:42.249543 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:42.249557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:42.280803 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:42.280833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:42.327682 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:42.327731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:42.373795 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:42.373832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:42.415409 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:42.415437 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:44.951197 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:44.962939 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:44.963016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:44.996268 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:44.996297 1225677 cri.go:89] found id: ""
	I1217 01:34:44.996306 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:44.996365 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.016281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:45.016367 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:45.152354 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.152375 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.152380 1225677 cri.go:89] found id: ""
	I1217 01:34:45.152389 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:45.152473 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.161519 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.169793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:45.169869 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:45.269649 1225677 cri.go:89] found id: ""
	I1217 01:34:45.269685 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.269696 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:45.269715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:45.269816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:45.322137 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.322210 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:45.322250 1225677 cri.go:89] found id: ""
	I1217 01:34:45.322320 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:45.322406 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.327229 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.331531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:45.331703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:45.362501 1225677 cri.go:89] found id: ""
	I1217 01:34:45.362571 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.362602 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:45.362624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:45.362696 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:45.394160 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.394240 1225677 cri.go:89] found id: ""
	I1217 01:34:45.394258 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:45.394335 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.398315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:45.398397 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:45.426737 1225677 cri.go:89] found id: ""
	I1217 01:34:45.426780 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.426790 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:45.426819 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:45.426839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:45.503383 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:45.503464 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:45.503485 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:45.535637 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:45.535672 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.583362 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:45.583398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.613182 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:45.613214 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:45.695579 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:45.695626 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:45.729534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:45.729563 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:45.826222 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:45.826262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:45.846157 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:45.846195 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.911389 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:45.911426 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.983046 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:45.983084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.519530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:48.530493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:48.530565 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:48.560366 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:48.560471 1225677 cri.go:89] found id: ""
	I1217 01:34:48.560496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:48.560585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.564848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:48.564920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:48.593560 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.593628 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:48.593666 1225677 cri.go:89] found id: ""
	I1217 01:34:48.593696 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:48.593783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.597895 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.601634 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:48.601718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:48.631022 1225677 cri.go:89] found id: ""
	I1217 01:34:48.631048 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.631057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:48.631064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:48.631122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:48.656804 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:48.656829 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.656834 1225677 cri.go:89] found id: ""
	I1217 01:34:48.656841 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:48.656898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.660979 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.664698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:48.664770 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:48.692344 1225677 cri.go:89] found id: ""
	I1217 01:34:48.692372 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.692383 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:48.692389 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:48.692481 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:48.721997 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:48.722020 1225677 cri.go:89] found id: ""
	I1217 01:34:48.722029 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:48.722111 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.726120 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:48.726247 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:48.753313 1225677 cri.go:89] found id: ""
	I1217 01:34:48.753339 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.753349 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:48.753358 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:48.753388 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:48.849435 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:48.849474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:48.870486 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:48.870523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:48.943874 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:48.943904 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:48.943919 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.991171 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:48.991205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:49.020622 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:49.020649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:49.064904 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:49.064942 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:49.143148 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:49.143186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:49.174999 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:49.175086 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:49.209127 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:49.209156 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:49.296275 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:49.296325 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:51.840412 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:51.851134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:51.851204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:51.880791 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:51.880811 1225677 cri.go:89] found id: ""
	I1217 01:34:51.880820 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:51.880879 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.884883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:51.884962 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:51.911511 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:51.911535 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:51.911541 1225677 cri.go:89] found id: ""
	I1217 01:34:51.911549 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:51.911607 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.915352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.918918 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:51.918986 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:51.950127 1225677 cri.go:89] found id: ""
	I1217 01:34:51.950152 1225677 logs.go:282] 0 containers: []
	W1217 01:34:51.950163 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:51.950169 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:51.950266 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:51.978696 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:51.978725 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:51.978731 1225677 cri.go:89] found id: ""
	I1217 01:34:51.978738 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:51.978795 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.982736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.986411 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:51.986482 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:52.016886 1225677 cri.go:89] found id: ""
	I1217 01:34:52.016911 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.016920 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:52.016926 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:52.016989 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:52.045870 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.045895 1225677 cri.go:89] found id: ""
	I1217 01:34:52.045904 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:52.045962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:52.049906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:52.049977 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:52.077565 1225677 cri.go:89] found id: ""
	I1217 01:34:52.077592 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.077604 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:52.077614 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:52.077646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:52.105176 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:52.105205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:52.211964 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:52.211999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:52.252350 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:52.252382 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:52.306053 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:52.306088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:52.376262 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:52.376302 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:52.403480 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:52.403508 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.431952 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:52.431983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:52.510953 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:52.510990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:52.555450 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:52.555482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:52.574086 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:52.574119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:52.644412 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.144646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:55.155615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:55.155693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:55.184697 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.184716 1225677 cri.go:89] found id: ""
	I1217 01:34:55.184724 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:55.184781 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.188462 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:55.188538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:55.217937 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.217961 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.217966 1225677 cri.go:89] found id: ""
	I1217 01:34:55.217974 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:55.218030 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.221924 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.226643 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:55.226714 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:55.254617 1225677 cri.go:89] found id: ""
	I1217 01:34:55.254645 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.254655 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:55.254662 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:55.254721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:55.282393 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.282419 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.282424 1225677 cri.go:89] found id: ""
	I1217 01:34:55.282432 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:55.282485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.286357 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.289912 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:55.289992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:55.316252 1225677 cri.go:89] found id: ""
	I1217 01:34:55.316278 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.316288 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:55.316295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:55.316368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:55.343249 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.343314 1225677 cri.go:89] found id: ""
	I1217 01:34:55.343337 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:55.343433 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.347319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:55.347448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:55.381545 1225677 cri.go:89] found id: ""
	I1217 01:34:55.381629 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.381645 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:55.381656 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:55.381669 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.421981 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:55.422014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.453301 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:55.453342 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.480646 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:55.480687 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:55.570826 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.570849 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:55.570863 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.599216 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:55.599257 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.658218 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:55.658310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.745919 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:55.745955 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:55.838064 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:55.838101 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:55.888374 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:55.888405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:55.996293 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:55.996331 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:58.522397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:58.536202 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:58.536271 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:58.566870 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:58.566965 1225677 cri.go:89] found id: ""
	I1217 01:34:58.566994 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:58.567139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.571283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:58.571363 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:58.598180 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:58.598208 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.598213 1225677 cri.go:89] found id: ""
	I1217 01:34:58.598222 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:58.598297 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.602201 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.605913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:58.605997 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:58.636167 1225677 cri.go:89] found id: ""
	I1217 01:34:58.636193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.636202 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:58.636209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:58.636270 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:58.662111 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:58.662135 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:58.662140 1225677 cri.go:89] found id: ""
	I1217 01:34:58.662148 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:58.662209 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.666315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.670253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:58.670348 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:58.696144 1225677 cri.go:89] found id: ""
	I1217 01:34:58.696219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.696244 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:58.696265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:58.696347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:58.726742 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.726767 1225677 cri.go:89] found id: ""
	I1217 01:34:58.726776 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:58.726832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.730710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:58.730785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:58.759394 1225677 cri.go:89] found id: ""
	I1217 01:34:58.759421 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.759431 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:58.759440 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:58.759454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.817531 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:58.817569 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.847360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:58.847389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:58.929741 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:58.929776 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:58.968951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:58.968982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:59.043218 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:59.043239 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:59.043255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:59.070405 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:59.070431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:59.146784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:59.146829 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:59.179445 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:59.179479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:59.286441 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:59.286479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:59.308412 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:59.308540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.850397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:01.863234 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:01.863368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:01.898442 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:01.898473 1225677 cri.go:89] found id: ""
	I1217 01:35:01.898484 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:01.898577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.903064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:01.903142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:01.936524 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.936547 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:01.936551 1225677 cri.go:89] found id: ""
	I1217 01:35:01.936559 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:01.936625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.942865 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.947963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:01.948071 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:01.979359 1225677 cri.go:89] found id: ""
	I1217 01:35:01.979384 1225677 logs.go:282] 0 containers: []
	W1217 01:35:01.979393 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:01.979399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:01.979466 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:02.012882 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.012925 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.012931 1225677 cri.go:89] found id: ""
	I1217 01:35:02.012975 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:02.013055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.017605 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.021797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:02.021870 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:02.049550 1225677 cri.go:89] found id: ""
	I1217 01:35:02.049621 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.049638 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:02.049646 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:02.049722 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:02.081301 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.081326 1225677 cri.go:89] found id: ""
	I1217 01:35:02.081335 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:02.081392 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.086118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:02.086210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:02.125352 1225677 cri.go:89] found id: ""
	I1217 01:35:02.125374 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.125383 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:02.125393 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:02.125405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:02.197255 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:02.197318 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:02.197355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:02.226446 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:02.226488 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:02.271257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:02.271293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:02.314955 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:02.314988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.386430 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:02.386468 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.417607 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:02.417682 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.449011 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:02.449041 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:02.551859 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:02.551899 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:02.571928 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:02.571960 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:02.659356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:02.659395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:05.190765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:05.203695 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:05.203771 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:05.238686 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.238707 1225677 cri.go:89] found id: ""
	I1217 01:35:05.238716 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:05.238778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.242613 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:05.242687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:05.272627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.272661 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.272667 1225677 cri.go:89] found id: ""
	I1217 01:35:05.272675 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:05.272757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.277184 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.281337 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:05.281414 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:05.309340 1225677 cri.go:89] found id: ""
	I1217 01:35:05.309361 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.309370 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:05.309377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:05.309437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:05.342268 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.342294 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.342300 1225677 cri.go:89] found id: ""
	I1217 01:35:05.342308 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:05.342394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.346668 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.350724 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:05.350805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:05.378257 1225677 cri.go:89] found id: ""
	I1217 01:35:05.378289 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.378298 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:05.378305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:05.378366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:05.406348 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.406370 1225677 cri.go:89] found id: ""
	I1217 01:35:05.406379 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:05.406455 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.410653 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:05.410724 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:05.441777 1225677 cri.go:89] found id: ""
	I1217 01:35:05.441802 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.441812 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:05.441820 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:05.441832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:05.521081 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:05.521113 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:05.521127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.559491 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:05.559525 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.608690 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:05.608727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.640635 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:05.640666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:05.720771 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:05.720808 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:05.824388 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:05.824427 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.864839 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:05.864871 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.960476 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:05.960520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.992555 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:05.992588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:06.045891 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:06.045925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:08.568611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:08.579598 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:08.579681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:08.607399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.607421 1225677 cri.go:89] found id: ""
	I1217 01:35:08.607430 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:08.607485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.611906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:08.611982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:08.638447 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.638470 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:08.638476 1225677 cri.go:89] found id: ""
	I1217 01:35:08.638484 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:08.638558 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.642337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.646066 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:08.646162 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:08.673000 1225677 cri.go:89] found id: ""
	I1217 01:35:08.673026 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.673036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:08.673042 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:08.673135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:08.701768 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:08.701792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:08.701798 1225677 cri.go:89] found id: ""
	I1217 01:35:08.701806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:08.701892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.705733 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.709545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:08.709620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:08.736283 1225677 cri.go:89] found id: ""
	I1217 01:35:08.736309 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.736319 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:08.736325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:08.736383 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:08.763589 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:08.763610 1225677 cri.go:89] found id: ""
	I1217 01:35:08.763618 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:08.763679 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.768008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:08.768157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:08.794921 1225677 cri.go:89] found id: ""
	I1217 01:35:08.794948 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.794957 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:08.794967 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:08.795003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:08.866335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:08.866356 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:08.866371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.894862 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:08.894894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.945712 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:08.945749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:09.030175 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:09.030213 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:09.057626 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:09.057656 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:09.140070 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:09.140109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:09.249646 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:09.249685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:09.269874 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:09.269906 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:09.317090 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:09.317126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:09.346482 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:09.346513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:11.877651 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:11.889575 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:11.889645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:11.917211 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:11.917234 1225677 cri.go:89] found id: ""
	I1217 01:35:11.917243 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:11.917309 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.921144 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:11.921223 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:11.955516 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:11.955536 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:11.955541 1225677 cri.go:89] found id: ""
	I1217 01:35:11.955548 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:11.955604 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.959308 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.962862 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:11.962933 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:11.991261 1225677 cri.go:89] found id: ""
	I1217 01:35:11.991284 1225677 logs.go:282] 0 containers: []
	W1217 01:35:11.991293 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:11.991299 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:11.991366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:12.023452 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.023477 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.023483 1225677 cri.go:89] found id: ""
	I1217 01:35:12.023491 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:12.023581 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.027715 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.031641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:12.031751 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:12.059135 1225677 cri.go:89] found id: ""
	I1217 01:35:12.059211 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.059234 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:12.059255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:12.059343 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:12.092809 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.092830 1225677 cri.go:89] found id: ""
	I1217 01:35:12.092839 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:12.092915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.096814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:12.096963 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:12.132911 1225677 cri.go:89] found id: ""
	I1217 01:35:12.132936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.132946 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:12.132955 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:12.132966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:12.235310 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:12.235346 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:12.255554 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:12.255587 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:12.303522 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:12.303560 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.374998 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:12.375032 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:12.461333 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:12.461371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:12.547450 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:12.547475 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:12.547489 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:12.574864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:12.574892 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:12.619775 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:12.619816 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.649040 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:12.649123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.677296 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:12.677326 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.212228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:15.225138 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:15.225215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:15.259192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.259218 1225677 cri.go:89] found id: ""
	I1217 01:35:15.259228 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:15.259287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.263205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:15.263279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:15.290493 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.290516 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.290521 1225677 cri.go:89] found id: ""
	I1217 01:35:15.290529 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:15.290588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.294490 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.298107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:15.298208 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:15.325021 1225677 cri.go:89] found id: ""
	I1217 01:35:15.325047 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.325057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:15.325063 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:15.325125 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:15.353712 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.353744 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.353750 1225677 cri.go:89] found id: ""
	I1217 01:35:15.353758 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:15.353828 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.357883 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.361729 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:15.361817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:15.389342 1225677 cri.go:89] found id: ""
	I1217 01:35:15.389370 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.389379 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:15.389386 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:15.389449 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:15.418437 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.418470 1225677 cri.go:89] found id: ""
	I1217 01:35:15.418479 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:15.418553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.422466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:15.422548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:15.449297 1225677 cri.go:89] found id: ""
	I1217 01:35:15.449333 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.449343 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:15.449370 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:15.449394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:15.468355 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:15.468385 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.494969 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:15.495005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.543170 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:15.543209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:15.616803 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:15.616829 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:15.616845 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.659996 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:15.660031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.730995 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:15.731034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.758963 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:15.758994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.785562 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:15.785633 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:15.872457 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:15.872494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.904808 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:15.904838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:18.506161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:18.518520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:18.518589 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:18.550949 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.550972 1225677 cri.go:89] found id: ""
	I1217 01:35:18.550982 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:18.551041 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.554800 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:18.554880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:18.582497 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:18.582522 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.582527 1225677 cri.go:89] found id: ""
	I1217 01:35:18.582535 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:18.582594 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.586831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.590486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:18.590560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:18.617401 1225677 cri.go:89] found id: ""
	I1217 01:35:18.617426 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.617436 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:18.617443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:18.617504 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:18.648400 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:18.648458 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.648464 1225677 cri.go:89] found id: ""
	I1217 01:35:18.648472 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:18.648530 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.652380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.655820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:18.655916 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:18.689519 1225677 cri.go:89] found id: ""
	I1217 01:35:18.689544 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.689553 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:18.689560 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:18.689621 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:18.718284 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:18.718306 1225677 cri.go:89] found id: ""
	I1217 01:35:18.718313 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:18.718368 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.722268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:18.722372 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:18.753514 1225677 cri.go:89] found id: ""
	I1217 01:35:18.753542 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.753558 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:18.753567 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:18.753611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:18.771813 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:18.771842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:18.845441 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:18.845463 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:18.845477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.872553 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:18.872582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.922099 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:18.922176 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.950258 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:18.950285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:18.990211 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:18.990241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:19.031127 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:19.031164 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:19.107071 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:19.107109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:19.138299 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:19.138327 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:19.222624 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:19.222660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:21.834640 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:21.845711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:21.845784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:21.895249 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:21.895280 1225677 cri.go:89] found id: ""
	I1217 01:35:21.895292 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:21.895371 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.902322 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:21.902404 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:21.943815 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:21.943857 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:21.943863 1225677 cri.go:89] found id: ""
	I1217 01:35:21.943877 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:21.943963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.949206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.954547 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:21.954640 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:21.988594 1225677 cri.go:89] found id: ""
	I1217 01:35:21.988620 1225677 logs.go:282] 0 containers: []
	W1217 01:35:21.988630 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:21.988636 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:21.988718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:22.024625 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.024646 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.024651 1225677 cri.go:89] found id: ""
	I1217 01:35:22.024660 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:22.024760 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.029143 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.033935 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:22.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:22.067922 1225677 cri.go:89] found id: ""
	I1217 01:35:22.067946 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.067955 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:22.067961 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:22.068020 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:22.097619 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.097641 1225677 cri.go:89] found id: ""
	I1217 01:35:22.097649 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:22.097706 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.101692 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:22.101766 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:22.136868 1225677 cri.go:89] found id: ""
	I1217 01:35:22.136891 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.136900 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:22.136911 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:22.136923 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:22.164209 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:22.164236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:22.208399 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:22.208512 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:22.256618 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:22.256650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.287201 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:22.287237 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.314443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:22.314472 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:22.346752 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:22.346780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:22.445530 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:22.445567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:22.464378 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:22.464409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.554715 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:22.554749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:22.659061 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:22.659103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:22.731143 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.231455 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:25.242812 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:25.242949 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:25.280443 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.280470 1225677 cri.go:89] found id: ""
	I1217 01:35:25.280478 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:25.280536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.284885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:25.285008 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:25.313823 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.313846 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.313852 1225677 cri.go:89] found id: ""
	I1217 01:35:25.313859 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:25.313939 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.317952 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.321539 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:25.321620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:25.354565 1225677 cri.go:89] found id: ""
	I1217 01:35:25.354632 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.354656 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:25.354681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:25.354777 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:25.386743 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.386774 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.386779 1225677 cri.go:89] found id: ""
	I1217 01:35:25.386787 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:25.386857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.390671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.394226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:25.394339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:25.421123 1225677 cri.go:89] found id: ""
	I1217 01:35:25.421212 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.421228 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:25.421236 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:25.421310 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:25.448879 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.448904 1225677 cri.go:89] found id: ""
	I1217 01:35:25.448913 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:25.448971 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.452707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:25.452782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:25.479351 1225677 cri.go:89] found id: ""
	I1217 01:35:25.479379 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.479389 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:25.479399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:25.479410 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:25.577317 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:25.577354 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:25.600156 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:25.600203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:25.679524 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.679600 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:25.679621 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.706792 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:25.706824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.764895 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:25.764934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.796158 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:25.796188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.823684 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:25.823721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:25.857273 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:25.857303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.915963 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:25.916003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.992485 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:25.992520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:28.577965 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:28.588733 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:28.588802 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:28.621192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.621211 1225677 cri.go:89] found id: ""
	I1217 01:35:28.621220 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:28.621279 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.625055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:28.625124 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:28.651718 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:28.651738 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.651742 1225677 cri.go:89] found id: ""
	I1217 01:35:28.651749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:28.651807 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.656353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.660550 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:28.660620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:28.688556 1225677 cri.go:89] found id: ""
	I1217 01:35:28.688580 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.688589 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:28.688596 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:28.688654 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:28.716478 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:28.716503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:28.716508 1225677 cri.go:89] found id: ""
	I1217 01:35:28.716516 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:28.716603 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.720442 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.723785 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:28.723862 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:28.750780 1225677 cri.go:89] found id: ""
	I1217 01:35:28.750807 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.750817 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:28.750823 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:28.750882 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:28.777746 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:28.777772 1225677 cri.go:89] found id: ""
	I1217 01:35:28.777781 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:28.777836 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.781586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:28.781707 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:28.812032 1225677 cri.go:89] found id: ""
	I1217 01:35:28.812062 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.812072 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:28.812081 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:28.812115 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:28.910028 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:28.910067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.938533 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:28.938565 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.982530 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:28.982566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:29.059912 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:29.059948 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:29.087417 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:29.087449 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:29.141591 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:29.141622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:29.162662 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:29.162694 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:29.245511 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:29.245537 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:29.245553 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:29.286747 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:29.286784 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:29.317045 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:29.317075 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:31.896935 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:31.908531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:31.908605 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:31.951663 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:31.951684 1225677 cri.go:89] found id: ""
	I1217 01:35:31.951692 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:31.951746 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.956325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:31.956501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:31.990512 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:31.990578 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:31.990598 1225677 cri.go:89] found id: ""
	I1217 01:35:31.990625 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:31.990708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.994957 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.001450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:32.001597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:32.033107 1225677 cri.go:89] found id: ""
	I1217 01:35:32.033136 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.033146 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:32.033153 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:32.033245 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:32.061118 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.061140 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.061145 1225677 cri.go:89] found id: ""
	I1217 01:35:32.061153 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:32.061208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.065195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.068963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:32.069066 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:32.099914 1225677 cri.go:89] found id: ""
	I1217 01:35:32.099941 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.099951 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:32.099957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:32.100018 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:32.134003 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.134028 1225677 cri.go:89] found id: ""
	I1217 01:35:32.134044 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:32.134101 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.138837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:32.138909 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:32.178095 1225677 cri.go:89] found id: ""
	I1217 01:35:32.178168 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.178193 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:32.178210 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:32.178223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:32.219018 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:32.219049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:32.328076 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:32.328182 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:32.347854 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:32.347887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:32.389069 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:32.389143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.464016 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:32.464052 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.492348 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:32.492466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.519965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:32.520035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:32.589420 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:32.589485 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:32.589506 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:32.615780 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:32.615814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:32.668491 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:32.668527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.253556 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:35.266266 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:35.266344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:35.303632 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.303658 1225677 cri.go:89] found id: ""
	I1217 01:35:35.303667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:35.303726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.307439 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:35.307511 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:35.336107 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.336131 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.336136 1225677 cri.go:89] found id: ""
	I1217 01:35:35.336143 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:35.336196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.340106 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.343587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:35.343667 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:35.374453 1225677 cri.go:89] found id: ""
	I1217 01:35:35.374483 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.374492 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:35.374498 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:35.374560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:35.401769 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.401792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.401798 1225677 cri.go:89] found id: ""
	I1217 01:35:35.401806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:35.401860 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.405507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.409182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:35.409254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:35.437191 1225677 cri.go:89] found id: ""
	I1217 01:35:35.437229 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.437280 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:35.437303 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:35.437454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:35.464026 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.464048 1225677 cri.go:89] found id: ""
	I1217 01:35:35.464056 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:35.464113 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.467752 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:35.467854 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:35.495119 1225677 cri.go:89] found id: ""
	I1217 01:35:35.495143 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.495152 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:35.495161 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:35.495173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.538118 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:35.538157 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.612361 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:35.612398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.642424 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:35.642454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.671140 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:35.671168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.753840 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:35.753879 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:35.791176 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:35.791207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:35.861567 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:35.861588 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:35.861604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.887544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:35.887573 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.930868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:35.930901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:36.035955 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:36.035997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.556940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:38.568341 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:38.568410 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:38.602139 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:38.602163 1225677 cri.go:89] found id: ""
	I1217 01:35:38.602172 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:38.602234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.606168 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:38.606244 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:38.636762 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:38.636782 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:38.636787 1225677 cri.go:89] found id: ""
	I1217 01:35:38.636795 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:38.636849 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.640703 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.644870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:38.644980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:38.672028 1225677 cri.go:89] found id: ""
	I1217 01:35:38.672105 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.672130 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:38.672152 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:38.672252 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:38.702063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:38.702088 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:38.702096 1225677 cri.go:89] found id: ""
	I1217 01:35:38.702104 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:38.702189 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.706075 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.710843 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:38.710923 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:38.739176 1225677 cri.go:89] found id: ""
	I1217 01:35:38.739204 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.739214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:38.739221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:38.739281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:38.765721 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:38.765749 1225677 cri.go:89] found id: ""
	I1217 01:35:38.765759 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:38.765835 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.769950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:38.770026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:38.797985 1225677 cri.go:89] found id: ""
	I1217 01:35:38.798013 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.798023 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:38.798033 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:38.798065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:38.898407 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:38.898448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.917886 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:38.917920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:38.999335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:38.999368 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:38.999384 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:39.041692 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:39.041729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:39.089675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:39.089712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:39.172952 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:39.172988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:39.211704 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:39.211736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:39.241891 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:39.241920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:39.276958 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:39.276988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:39.364067 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:39.364119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:41.897002 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:41.908024 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:41.908100 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:41.937482 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:41.937556 1225677 cri.go:89] found id: ""
	I1217 01:35:41.937569 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:41.937630 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.941542 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:41.941611 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:41.987116 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:41.987139 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:41.987145 1225677 cri.go:89] found id: ""
	I1217 01:35:41.987153 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:41.987206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.991091 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.994831 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:41.994905 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:42.033990 1225677 cri.go:89] found id: ""
	I1217 01:35:42.034016 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.034025 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:42.034031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:42.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:42.065878 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:42.065959 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.065980 1225677 cri.go:89] found id: ""
	I1217 01:35:42.066005 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:42.066122 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.071367 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.076378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:42.076531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:42.123414 1225677 cri.go:89] found id: ""
	I1217 01:35:42.123521 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.123583 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:42.123610 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:42.123706 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:42.163210 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.163302 1225677 cri.go:89] found id: ""
	I1217 01:35:42.163328 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:42.163431 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.168650 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:42.168758 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:42.211741 1225677 cri.go:89] found id: ""
	I1217 01:35:42.211767 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.211777 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:42.211787 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:42.211800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:42.252091 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:42.252126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:42.356409 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:42.356465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:42.377129 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:42.377163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:42.449855 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:42.449879 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:42.449893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:42.476498 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:42.476530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:42.518303 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:42.518337 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.548819 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:42.548852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.578811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:42.578840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:42.658356 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:42.658395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:42.700126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:42.700173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.276979 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:45.301570 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:45.301737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:45.339316 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:45.339342 1225677 cri.go:89] found id: ""
	I1217 01:35:45.339351 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:45.339441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.343543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:45.343652 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:45.374479 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.374552 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.374574 1225677 cri.go:89] found id: ""
	I1217 01:35:45.374600 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:45.374672 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.378901 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.382870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:45.382942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:45.413785 1225677 cri.go:89] found id: ""
	I1217 01:35:45.413816 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.413825 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:45.413832 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:45.413894 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:45.446395 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.446417 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.446423 1225677 cri.go:89] found id: ""
	I1217 01:35:45.446431 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:45.446508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.450414 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.454372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:45.454448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:45.483846 1225677 cri.go:89] found id: ""
	I1217 01:35:45.483918 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.483942 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:45.483963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:45.484039 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:45.515890 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.515962 1225677 cri.go:89] found id: ""
	I1217 01:35:45.515986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:45.516060 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.519980 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:45.520107 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:45.548900 1225677 cri.go:89] found id: ""
	I1217 01:35:45.548984 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.549001 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:45.549011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:45.549023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.594641 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:45.594680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.623072 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:45.623171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:45.701558 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:45.701599 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:45.775358 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:45.775423 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:45.775443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.822675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:45.822712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.904212 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:45.904249 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.934553 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:45.934581 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:45.966200 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:45.966231 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:46.073612 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:46.073651 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:46.092826 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:46.092860 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.626362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:48.637081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:48.637157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:48.663951 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.664018 1225677 cri.go:89] found id: ""
	I1217 01:35:48.664045 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:48.664137 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.667889 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:48.668007 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:48.695424 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:48.695498 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:48.695518 1225677 cri.go:89] found id: ""
	I1217 01:35:48.695570 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:48.695667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.699980 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.703779 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:48.703875 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:48.731347 1225677 cri.go:89] found id: ""
	I1217 01:35:48.731372 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.731381 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:48.731388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:48.731448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:48.761776 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:48.761802 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:48.761808 1225677 cri.go:89] found id: ""
	I1217 01:35:48.761816 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:48.761875 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.766072 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.769796 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:48.769871 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:48.799377 1225677 cri.go:89] found id: ""
	I1217 01:35:48.799404 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.799412 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:48.799418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:48.799477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:48.828149 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:48.828173 1225677 cri.go:89] found id: ""
	I1217 01:35:48.828192 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:48.828254 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.832599 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:48.832717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:48.858554 1225677 cri.go:89] found id: ""
	I1217 01:35:48.858587 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.858597 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:48.858626 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:48.858643 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:48.894472 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:48.894502 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:48.969952 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:48.969978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:48.969994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:49.014023 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:49.014058 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:49.092630 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:49.092671 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:49.197053 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:49.197088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:49.225929 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:49.225963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:49.253145 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:49.253174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:49.301391 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:49.301428 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:49.337786 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:49.337819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:49.367000 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:49.367029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:51.942903 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:51.957586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:51.957662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:52.007996 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.008017 1225677 cri.go:89] found id: ""
	I1217 01:35:52.008026 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:52.008082 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.015080 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:52.015148 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:52.052213 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.052249 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.052255 1225677 cri.go:89] found id: ""
	I1217 01:35:52.052262 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:52.052318 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.056182 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.059959 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:52.060033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:52.090239 1225677 cri.go:89] found id: ""
	I1217 01:35:52.090264 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.090274 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:52.090281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:52.090341 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:52.118854 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:52.118874 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.118879 1225677 cri.go:89] found id: ""
	I1217 01:35:52.118886 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:52.118946 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.125093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.128837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:52.128931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:52.157907 1225677 cri.go:89] found id: ""
	I1217 01:35:52.157936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.157945 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:52.157957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:52.158017 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:52.191428 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.191451 1225677 cri.go:89] found id: ""
	I1217 01:35:52.191459 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:52.191543 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.195375 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:52.195456 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:52.224407 1225677 cri.go:89] found id: ""
	I1217 01:35:52.224468 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.224477 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:52.224486 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:52.224498 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.252950 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:52.252981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.279228 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:52.279258 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:52.298974 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:52.299007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:52.370510 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:52.370544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:52.370588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.418893 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:52.418934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:52.499956 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:52.499992 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:52.542158 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:52.542187 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:52.643325 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:52.643367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.671238 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:52.671267 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.712214 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:52.712252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.294635 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:55.305795 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:55.305897 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:55.341120 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.341143 1225677 cri.go:89] found id: ""
	I1217 01:35:55.341152 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:55.341208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.345154 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:55.345236 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:55.376865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.376937 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.376959 1225677 cri.go:89] found id: ""
	I1217 01:35:55.376982 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:55.377065 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.381380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.385355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:55.385472 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:55.412679 1225677 cri.go:89] found id: ""
	I1217 01:35:55.412701 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.412710 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:55.412716 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:55.412773 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:55.439554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.439573 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.439578 1225677 cri.go:89] found id: ""
	I1217 01:35:55.439585 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:55.439639 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.443337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.446737 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:55.446804 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:55.478015 1225677 cri.go:89] found id: ""
	I1217 01:35:55.478039 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.478052 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:55.478065 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:55.478136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:55.503877 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:55.503940 1225677 cri.go:89] found id: ""
	I1217 01:35:55.503964 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:55.504038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.507809 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:55.507880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:55.539899 1225677 cri.go:89] found id: ""
	I1217 01:35:55.539926 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.539935 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:55.539951 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:55.539963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:55.642073 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:55.642111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:55.662102 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:55.662143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.689162 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:55.689192 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.728771 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:55.728804 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.755851 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:55.755878 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:55.839759 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:55.839805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:55.910162 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:55.910183 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:55.910197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.962626 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:55.962664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:56.057075 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:56.057126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:56.095037 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:56.095069 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:58.632280 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:58.643092 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:58.643199 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:58.670245 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:58.670268 1225677 cri.go:89] found id: ""
	I1217 01:35:58.670277 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:58.670332 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.673988 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:58.674059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:58.706113 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:58.706135 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:58.706140 1225677 cri.go:89] found id: ""
	I1217 01:35:58.706148 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:58.706234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.710732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.714631 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:58.714747 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:58.742956 1225677 cri.go:89] found id: ""
	I1217 01:35:58.742982 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.742991 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:58.742997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:58.743058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:58.774022 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:58.774044 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:58.774050 1225677 cri.go:89] found id: ""
	I1217 01:35:58.774058 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:58.774112 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.778073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.781607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:58.781686 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:58.808679 1225677 cri.go:89] found id: ""
	I1217 01:35:58.808703 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.808719 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:58.808725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:58.808785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:58.835922 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:58.835942 1225677 cri.go:89] found id: ""
	I1217 01:35:58.835951 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:58.836007 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.839615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:58.839689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:58.866788 1225677 cri.go:89] found id: ""
	I1217 01:35:58.866813 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.866823 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:58.866833 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:58.866866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:58.968702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:58.968738 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:58.989939 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:58.989967 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:59.058020 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:59.058046 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:59.058059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:59.088364 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:59.088394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:59.141100 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:59.141135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:59.232851 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:59.232891 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:59.262771 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:59.262800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:59.290187 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:59.290224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:59.339890 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:59.339924 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:59.422198 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:59.422236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:01.956538 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:01.967590 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:01.967660 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:02.007538 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.007575 1225677 cri.go:89] found id: ""
	I1217 01:36:02.007584 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:02.007670 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.012001 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:02.012136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:02.046710 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.046735 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.046741 1225677 cri.go:89] found id: ""
	I1217 01:36:02.046749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:02.046804 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.050667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.054450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:02.054546 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:02.081851 1225677 cri.go:89] found id: ""
	I1217 01:36:02.081880 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.081890 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:02.081897 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:02.081980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:02.112077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.112101 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.112106 1225677 cri.go:89] found id: ""
	I1217 01:36:02.112114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:02.112169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.116263 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.121396 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:02.121492 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:02.152376 1225677 cri.go:89] found id: ""
	I1217 01:36:02.152404 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.152497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:02.152523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:02.152642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:02.187133 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.187159 1225677 cri.go:89] found id: ""
	I1217 01:36:02.187168 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:02.187247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.191078 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:02.191173 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:02.220566 1225677 cri.go:89] found id: ""
	I1217 01:36:02.220593 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.220602 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:02.220611 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:02.220659 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.253992 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:02.254021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.304043 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:02.304077 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.350981 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:02.351020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.431358 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:02.431393 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.458269 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:02.458298 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:02.561780 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:02.561820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:02.582487 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:02.582522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:02.663558 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:02.663583 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:02.663596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.700536 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:02.700568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:02.775505 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:02.775547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.310734 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:05.322909 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:05.322985 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:05.350653 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.350738 1225677 cri.go:89] found id: ""
	I1217 01:36:05.350762 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:05.350819 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.355346 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:05.355461 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:05.385411 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:05.385439 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.385445 1225677 cri.go:89] found id: ""
	I1217 01:36:05.385453 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:05.385511 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.389761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.393387 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:05.393463 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:05.420412 1225677 cri.go:89] found id: ""
	I1217 01:36:05.420495 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.420505 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:05.420511 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:05.420569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:05.452034 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:05.452060 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.452066 1225677 cri.go:89] found id: ""
	I1217 01:36:05.452075 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:05.452131 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.456205 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.460128 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:05.460221 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:05.486956 1225677 cri.go:89] found id: ""
	I1217 01:36:05.486986 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.486995 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:05.487002 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:05.487063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:05.518138 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.518160 1225677 cri.go:89] found id: ""
	I1217 01:36:05.518169 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:05.518227 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.522038 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:05.522112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:05.552883 1225677 cri.go:89] found id: ""
	I1217 01:36:05.552951 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.552969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:05.552980 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:05.552994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.580975 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:05.581006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:05.677135 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:05.677178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:05.697133 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:05.697163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.725150 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:05.725181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.768358 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:05.768396 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.794846 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:05.794876 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:05.871841 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:05.871921 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.905951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:05.905982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:05.976460 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:05.976482 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:05.976495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:06.030179 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:06.030260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.614353 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:08.625446 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:08.625527 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:08.652272 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.652300 1225677 cri.go:89] found id: ""
	I1217 01:36:08.652309 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:08.652372 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.656164 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:08.656237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:08.682167 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.682186 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:08.682190 1225677 cri.go:89] found id: ""
	I1217 01:36:08.682198 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:08.682258 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.686632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.690338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:08.690409 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:08.717708 1225677 cri.go:89] found id: ""
	I1217 01:36:08.717732 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.717741 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:08.717748 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:08.717805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:08.754193 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.754217 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:08.754222 1225677 cri.go:89] found id: ""
	I1217 01:36:08.754229 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:08.754285 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.758295 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.761917 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:08.762011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:08.793723 1225677 cri.go:89] found id: ""
	I1217 01:36:08.793750 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.793761 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:08.793774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:08.793833 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:08.820995 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:08.821018 1225677 cri.go:89] found id: ""
	I1217 01:36:08.821027 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:08.821109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.824969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:08.825043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:08.850861 1225677 cri.go:89] found id: ""
	I1217 01:36:08.850896 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.850906 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:08.850917 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:08.850929 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:08.927540 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:08.927562 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:08.927576 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.953082 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:08.953110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.994744 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:08.994781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:09.027277 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:09.027305 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:09.056339 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:09.056367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:09.129785 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:09.129820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:09.161526 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:09.161607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:09.261869 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:09.261908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:09.282618 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:09.282652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:09.328912 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:09.328949 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:11.909228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:11.920145 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:11.920215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:11.953558 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:11.953581 1225677 cri.go:89] found id: ""
	I1217 01:36:11.953589 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:11.953643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.957221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:11.957293 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:11.984240 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:11.984263 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:11.984268 1225677 cri.go:89] found id: ""
	I1217 01:36:11.984276 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:11.984336 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.987996 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.991849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:11.991924 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:12.022066 1225677 cri.go:89] found id: ""
	I1217 01:36:12.022096 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.022106 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:12.022113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:12.022174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:12.058540 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.058563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.058569 1225677 cri.go:89] found id: ""
	I1217 01:36:12.058577 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:12.058629 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.063379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.067419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:12.067548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:12.095872 1225677 cri.go:89] found id: ""
	I1217 01:36:12.095900 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.095922 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:12.095929 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:12.095998 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:12.134836 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.134910 1225677 cri.go:89] found id: ""
	I1217 01:36:12.134933 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:12.135022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.139454 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:12.139524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:12.178455 1225677 cri.go:89] found id: ""
	I1217 01:36:12.178481 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.178491 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:12.178500 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:12.178538 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.215176 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:12.215204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:12.304978 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:12.305015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:12.342716 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:12.342745 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:12.444908 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:12.444945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:12.463288 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:12.463316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:12.536568 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:12.536589 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:12.536603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:12.576446 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:12.576479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.652969 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:12.653004 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.684862 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:12.684893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:12.713785 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:12.713815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:15.267669 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:15.282407 1225677 out.go:203] 
	W1217 01:36:15.285472 1225677 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 01:36:15.285518 1225677 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 01:36:15.285531 1225677 out.go:285] * Related issues:
	* Related issues:
	W1217 01:36:15.285545 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1217 01:36:15.285561 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1217 01:36:15.288521 1225677 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 105
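For local reproduction, the failing invocation can be rerun as-is; this is only a sketch, with the profile name, flags, and binary path copied verbatim from the test args above and assuming a checkout where out/minikube-linux-arm64 has been built:

    $ out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5 \
        --driver=docker --container-runtime=crio
    # exit status 105 corresponds to the K8S_APISERVER_MISSING reason shown in the stderr block above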
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-202151
helpers_test.go:244: (dbg) docker inspect ha-202151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	        "Created": "2025-12-17T01:12:34.697109094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1225803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:28:24.223784082Z",
	            "FinishedAt": "2025-12-17T01:28:23.510213695Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d-json.log",
	        "Name": "/ha-202151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-202151:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-202151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	                "LowerDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-202151",
	                "Source": "/var/lib/docker/volumes/ha-202151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-202151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-202151",
	                "name.minikube.sigs.k8s.io": "ha-202151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a8bfe290f37deb1c3104d9ab559bda078e71c5706919642a39ad4ea7fcab4f9",
	            "SandboxKey": "/var/run/docker/netns/1a8bfe290f37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-202151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:fe:96:8f:04:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e224ccab4890fdef242aee82a08ae93dfe44ddd1860f17db152892136a611dec",
	                    "EndpointID": "d9f94b3340492bc0b924fd0e2620aaaaec200a88061066241297f013a7336f77",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-202151",
	                        "0d1af93acb20"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
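For reference, the container state captured in the JSON above can be re-queried on the Jenkins host with the same Go-template inspect queries the test driver runs; this is a hypothetical reproduction sketch (it assumes the ha-202151 container from the dump still exists on that host), not output from the harness:

	# re-check the container state and the published SSH port for profile ha-202151
	docker container inspect ha-202151 --format '{{.State.Status}}'
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-202151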
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-202151 -n ha-202151
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 logs -n 25: (2.114731015s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt                                                             │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node start m02 --alsologtostderr -v 5                                                                                      │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │                     │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │                     │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:25 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5                                                                                   │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:27 UTC │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │                     │
	│ node    │ ha-202151 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:27 UTC │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:28 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:28:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:28:23.957919 1225677 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.958241 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958276 1225677 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.958300 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958577 1225677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.958999 1225677 out.go:368] Setting JSON to false
	I1217 01:28:23.959883 1225677 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25854,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:28:23.959981 1225677 start.go:143] virtualization:  
	I1217 01:28:23.963109 1225677 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:28:23.966861 1225677 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:23.967008 1225677 notify.go:221] Checking for updates...
	I1217 01:28:23.972825 1225677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:23.975704 1225677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:23.978560 1225677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:28:23.981565 1225677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:28:23.984558 1225677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:23.987973 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.988577 1225677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:24.018679 1225677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:28:24.018817 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.078613 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.06901697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.078731 1225677 docker.go:319] overlay module found
	I1217 01:28:24.081724 1225677 out.go:179] * Using the docker driver based on existing profile
	I1217 01:28:24.084659 1225677 start.go:309] selected driver: docker
	I1217 01:28:24.084679 1225677 start.go:927] validating driver "docker" against &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.084825 1225677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:24.084933 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.139102 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.130176461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.139528 1225677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:28:24.139560 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:24.139616 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:24.139662 1225677 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.142829 1225677 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:28:24.145513 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:24.148343 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:24.151136 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:24.151182 1225677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:28:24.151172 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:24.151191 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:24.151281 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:24.151292 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:24.151447 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.170893 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:24.170917 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:24.170932 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:24.170962 1225677 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:24.171020 1225677 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-202151"
	I1217 01:28:24.171043 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:24.171052 1225677 fix.go:54] fixHost starting: 
	I1217 01:28:24.171312 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.188404 1225677 fix.go:112] recreateIfNeeded on ha-202151: state=Stopped err=<nil>
	W1217 01:28:24.188458 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:24.191811 1225677 out.go:252] * Restarting existing docker container for "ha-202151" ...
	I1217 01:28:24.191909 1225677 cli_runner.go:164] Run: docker start ha-202151
	I1217 01:28:24.438707 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.459881 1225677 kic.go:430] container "ha-202151" state is running.
	I1217 01:28:24.460741 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:24.487033 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.487599 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:24.487676 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:24.511372 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:24.513726 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:24.513748 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:24.516008 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:28:27.648958 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.648981 1225677 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:28:27.649043 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.671053 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.671376 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.671387 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:28:27.816001 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.816128 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.833557 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.833865 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.833885 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:27.968607 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:27.968638 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:27.968669 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:27.968686 1225677 provision.go:84] configureAuth start
	I1217 01:28:27.968751 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:27.986183 1225677 provision.go:143] copyHostCerts
	I1217 01:28:27.986244 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986288 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:27.986301 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986379 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:27.986471 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986493 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:27.986502 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986530 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:27.986576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986601 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:27.986609 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986637 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:27.986687 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
	I1217 01:28:28.161966 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:28.162074 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:28.162136 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.180162 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.276314 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:28.276374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:28.294399 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:28.294463 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:28:28.312546 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:28.312611 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:28.329872 1225677 provision.go:87] duration metric: took 361.168151ms to configureAuth
	I1217 01:28:28.329900 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:28.330141 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:28.330260 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.347687 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:28.348017 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:28.348037 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:28.719002 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:28.719025 1225677 machine.go:97] duration metric: took 4.231409969s to provisionDockerMachine
	I1217 01:28:28.719036 1225677 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:28:28.719047 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:28.719106 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:28.719158 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.741197 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.836254 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:28.839569 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:28.839599 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:28.839611 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:28.839667 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:28.839747 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:28.839758 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:28.839856 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:28.847310 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:28.864518 1225677 start.go:296] duration metric: took 145.466453ms for postStartSetup
	I1217 01:28:28.864667 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:28.864709 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.882572 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.974073 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:28.979262 1225677 fix.go:56] duration metric: took 4.808204011s for fixHost
	I1217 01:28:28.979289 1225677 start.go:83] releasing machines lock for "ha-202151", held for 4.808256014s
	I1217 01:28:28.979366 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:29.000545 1225677 ssh_runner.go:195] Run: cat /version.json
	I1217 01:28:29.000593 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:29.000605 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.000678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.017863 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.030045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.205586 1225677 ssh_runner.go:195] Run: systemctl --version
	I1217 01:28:29.212211 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:29.247878 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:29.252247 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:29.252372 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:29.260987 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:29.261012 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:29.261044 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:29.261091 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:29.276500 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:29.289977 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:29.290113 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:29.306150 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:29.319359 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:29.442260 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:29.554130 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:29.554229 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:29.569409 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:29.582225 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:29.693269 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:29.815821 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:29.829762 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:29.843587 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:29.843675 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.852929 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:29.853026 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.862094 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.870988 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.879860 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:29.888714 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.897427 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.906242 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.915392 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:29.923247 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:29.930867 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.085763 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:28:30.268466 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:28:30.268540 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:28:30.272645 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:28:30.272717 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:28:30.276359 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:28:30.302094 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:28:30.302194 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.329875 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.364988 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:28:30.367851 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:28:30.383155 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:28:30.387105 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.397488 1225677 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:28:30.397642 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:30.397701 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.434465 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.434490 1225677 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:28:30.434546 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.461597 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.461622 1225677 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:28:30.461631 1225677 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:28:30.461733 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:28:30.461815 1225677 ssh_runner.go:195] Run: crio config
	I1217 01:28:30.524993 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:30.525016 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:30.525041 1225677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:28:30.525063 1225677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:28:30.525197 1225677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:28:30.525219 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:28:30.525269 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:28:30.537247 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:30.537359 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:28:30.537423 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:28:30.545256 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:28:30.545330 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:28:30.553189 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:28:30.566160 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:28:30.579061 1225677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:28:30.591667 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:28:30.604079 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:28:30.607859 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.617660 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.737827 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:28:30.755642 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:28:30.755663 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:28:30.755694 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:30.755839 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:28:30.755906 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:28:30.755919 1225677 certs.go:257] generating profile certs ...
	I1217 01:28:30.755998 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:28:30.756031 1225677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698
	I1217 01:28:30.756050 1225677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:28:31.070955 1225677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 ...
	I1217 01:28:31.071062 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698: {Name:mke1b333e19e123d757f2361ffab64b3ce630ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071323 1225677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 ...
	I1217 01:28:31.071369 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698: {Name:mk12d8ef8dbb1ef8ff84c5ba8c83b430a9515230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071553 1225677 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:28:31.071777 1225677 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:28:31.071982 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:28:31.072020 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:28:31.072053 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:28:31.072099 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:28:31.072142 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:28:31.072179 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:28:31.072222 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:28:31.072260 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:28:31.072291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:28:31.072379 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:28:31.072496 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:28:31.072540 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:28:31.072623 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:28:31.072699 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:28:31.072755 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:28:31.072888 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:31.072995 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.073038 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.073074 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.073717 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:28:31.098054 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:28:31.121354 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:28:31.140746 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:28:31.159713 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:28:31.178284 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:28:31.196338 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:28:31.214382 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:28:31.231910 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:28:31.249283 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:28:31.267150 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:28:31.284464 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:28:31.297370 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:28:31.303511 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.310796 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:28:31.318435 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322279 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322380 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.363578 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:28:31.371139 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.378596 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:28:31.385983 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389802 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389911 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.449546 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:28:31.463605 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.474127 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:28:31.484475 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489596 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489713 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.551435 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:28:31.559450 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:28:31.573170 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:28:31.639157 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:28:31.715122 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:28:31.783477 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:28:31.844822 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:28:31.905215 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:28:31.967945 1225677 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:31.968163 1225677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:28:31.968241 1225677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:28:32.018626 1225677 cri.go:89] found id: "9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c"
	I1217 01:28:32.018691 1225677 cri.go:89] found id: "b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:28:32.018711 1225677 cri.go:89] found id: "d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e"
	I1217 01:28:32.018735 1225677 cri.go:89] found id: "f70584959dd02aedc5247d28de369b3dfbec762797364a5b46746119bcd380ba"
	I1217 01:28:32.018753 1225677 cri.go:89] found id: "82cc4882889dc4d930d89f36ac77114d0161f4172216bc47431b8697c0630be5"
	I1217 01:28:32.018781 1225677 cri.go:89] found id: ""
	I1217 01:28:32.018853 1225677 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 01:28:32.044061 1225677 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1217 01:28:32.044185 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:28:32.052950 1225677 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:28:32.053010 1225677 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:28:32.053080 1225677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:28:32.061188 1225677 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:32.061654 1225677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-202151" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.061797 1225677 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-202151" cluster setting kubeconfig missing "ha-202151" context setting]
	I1217 01:28:32.062106 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.062698 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:28:32.063465 1225677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:28:32.063546 1225677 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:28:32.063583 1225677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:28:32.063613 1225677 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:28:32.063651 1225677 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:28:32.063976 1225677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:28:32.063525 1225677 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:28:32.081817 1225677 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 01:28:32.081837 1225677 kubeadm.go:602] duration metric: took 28.80443ms to restartPrimaryControlPlane
	I1217 01:28:32.081846 1225677 kubeadm.go:403] duration metric: took 113.913079ms to StartCluster
	I1217 01:28:32.081861 1225677 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.081919 1225677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.082486 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.082669 1225677 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:28:32.082688 1225677 start.go:242] waiting for startup goroutines ...
	I1217 01:28:32.082706 1225677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:28:32.083152 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.086942 1225677 out.go:179] * Enabled addons: 
	I1217 01:28:32.089944 1225677 addons.go:530] duration metric: took 7.236595ms for enable addons: enabled=[]
	I1217 01:28:32.089983 1225677 start.go:247] waiting for cluster config update ...
	I1217 01:28:32.089992 1225677 start.go:256] writing updated cluster config ...
	I1217 01:28:32.093327 1225677 out.go:203] 
	I1217 01:28:32.096604 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.096790 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.100238 1225677 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:28:32.103257 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:32.106243 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:32.109227 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:32.109291 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:32.109420 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:32.109454 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:32.109592 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.109854 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:32.139073 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:32.139092 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:32.139106 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:32.139130 1225677 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:32.139181 1225677 start.go:364] duration metric: took 36.692µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:28:32.139199 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:32.139204 1225677 fix.go:54] fixHost starting: m02
	I1217 01:28:32.139463 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.170663 1225677 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:28:32.170689 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:32.173829 1225677 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:28:32.173910 1225677 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:28:32.543486 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.572710 1225677 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:28:32.573066 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:32.602951 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.603208 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:32.603266 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:32.629641 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:32.629950 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:32.629959 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:32.630596 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37710->127.0.0.1:33963: read: connection reset by peer
	I1217 01:28:35.808896 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:35.808924 1225677 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:28:35.808996 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:35.842137 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:35.842447 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:35.842466 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:28:36.038050 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:36.038178 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.082250 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:36.082569 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:36.082593 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:36.332805 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:36.332901 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:36.332944 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:36.332991 1225677 provision.go:84] configureAuth start
	I1217 01:28:36.333104 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:36.366101 1225677 provision.go:143] copyHostCerts
	I1217 01:28:36.366154 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366188 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:36.366198 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366291 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:36.366454 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366479 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:36.366484 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366514 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:36.366576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366600 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:36.366604 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366636 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:36.366685 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:28:36.714448 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:36.714609 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:36.714700 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.737234 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:36.864039 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:36.864124 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:36.913291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:36.913360 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:28:36.977060 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:36.977210 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:37.077043 1225677 provision.go:87] duration metric: took 744.017822ms to configureAuth
	I1217 01:28:37.077119 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:37.077458 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:37.077641 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:37.114203 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:37.114614 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:37.114630 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:38.749167 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:38.749190 1225677 machine.go:97] duration metric: took 6.145972988s to provisionDockerMachine
	I1217 01:28:38.749202 1225677 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:28:38.749218 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:38.749280 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:38.749320 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.798164 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:38.934750 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:38.938751 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:38.938784 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:38.938805 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:38.938890 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:38.939022 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:38.939035 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:38.939161 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:38.949374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:38.977662 1225677 start.go:296] duration metric: took 228.444359ms for postStartSetup
	I1217 01:28:38.977768 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:38.977833 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.997045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.094589 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:39.100157 1225677 fix.go:56] duration metric: took 6.9609442s for fixHost
	I1217 01:28:39.100185 1225677 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.960996095s
	I1217 01:28:39.100277 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:39.121509 1225677 out.go:179] * Found network options:
	I1217 01:28:39.124537 1225677 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:28:39.127500 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:28:39.127546 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:28:39.127633 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:39.127678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.127731 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:39.127813 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.159911 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.160356 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.389362 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:39.518196 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:39.518280 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:39.530690 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:39.530730 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:39.530766 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:39.530828 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:39.559452 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:39.590703 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:39.590778 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:39.623053 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:39.646277 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:39.924657 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:40.211696 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:40.211818 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:40.234789 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:40.255311 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:40.483522 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:40.697787 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:40.728627 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:40.773025 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:40.773101 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.810962 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:40.811053 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.830095 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.843899 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.859512 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:40.875469 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.891423 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.906705 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.920139 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:40.935324 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:40.949872 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:41.265195 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:11.765812 1225677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.500580562s)
	I1217 01:30:11.765836 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:11.765895 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:11.773685 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:30:11.773748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:30:11.777914 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:30:11.832219 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:30:11.832561 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.883307 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.931713 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:30:11.934749 1225677 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:30:11.937773 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:30:11.958180 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:11.963975 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:11.980941 1225677 mustload.go:66] Loading cluster: ha-202151
	I1217 01:30:11.981196 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:11.981523 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:30:12.010212 1225677 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:30:12.010538 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:30:12.010547 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:30:12.010562 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:12.010679 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:30:12.010721 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:30:12.010729 1225677 certs.go:257] generating profile certs ...
	I1217 01:30:12.010806 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:30:12.010871 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:30:12.010909 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:30:12.010918 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:30:12.010930 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:30:12.010942 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:30:12.010952 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:30:12.010963 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:30:12.010976 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:30:12.010988 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:30:12.010998 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:30:12.011046 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:30:12.011099 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:12.011108 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:30:12.011142 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:30:12.011167 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:12.011226 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:30:12.011276 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:30:12.011308 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.011330 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.011341 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.011405 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:30:12.040530 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:30:12.140835 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:30:12.145679 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:30:12.155103 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:30:12.158946 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:30:12.168468 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:30:12.172730 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:30:12.182622 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:30:12.186892 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:30:12.196428 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:30:12.200769 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:30:12.210174 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:30:12.214229 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:30:12.223408 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:12.242760 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:12.263233 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:12.281118 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:12.299303 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:30:12.317115 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:12.334779 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:12.352592 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:30:12.370481 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:30:12.389095 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:30:12.412594 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:12.449315 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:30:12.473400 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:30:12.494693 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:30:12.517806 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:30:12.543454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:30:12.563454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:30:12.583785 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:30:12.603782 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:30:12.611317 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.622461 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:30:12.631322 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635830 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635962 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.683099 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:12.692252 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.701723 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:12.714594 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719579 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719716 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.763558 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:12.772848 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.782803 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:30:12.792174 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.797950 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.798068 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.843461 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
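
The block above follows OpenSSL's hashed-directory convention: each CA file is placed under /usr/share/ca-certificates, hashed with `openssl x509 -hash`, and exposed as /etc/ssl/certs/<hash>.0 (b5213941.0 is minikubeCA.pem in this run). A minimal Go sketch of that pattern, not minikube's certs.go; paths are taken from the log and the program must run with enough privileges to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert mirrors the pattern in the log: `openssl x509 -hash -noout -in <cert>`
// prints the subject-name hash, and /etc/ssl/certs/<hash>.0 is symlinked back to
// the certificate so OpenSSL-based clients on the node trust it.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Equivalent of the `sudo ln -fs` calls in the log above.
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
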
	I1217 01:30:12.852350 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:12.856738 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:12.902677 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:12.948658 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:12.994789 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:13.042684 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:13.096054 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
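
Each `-checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same check can be expressed natively in Go; this is an illustrative sketch, not minikube's implementation, with one certificate path borrowed from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires within d.
// `openssl x509 -checkend 86400` answers the same question for a 24-hour window
// by exiting with a non-zero status when expiry is that close.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
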
	I1217 01:30:13.158401 1225677 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:30:13.158570 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:13.158615 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:30:13.158706 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:30:13.173582 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:30:13.173705 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
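
The kube-vip manifest above is generated without IPVS-based load balancing because the earlier `lsmod | grep ip_vs` probe exited with status 1. A minimal sketch of that probe and the fallback decision, illustrative only and not minikube's kube-vip.go:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether any ip_vs kernel modules are loaded, using the
// same `lsmod | grep ip_vs` probe as the log above; grep exits 1 when nothing matches.
func ipvsAvailable() bool {
	return exec.Command("/bin/sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules loaded: control-plane load balancing can be enabled")
	} else {
		fmt.Println("ip_vs modules missing: generate the kube-vip config without load balancing")
	}
}
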
	I1217 01:30:13.173834 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:13.183901 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:13.184021 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:30:13.192889 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:30:13.208806 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:13.224983 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:30:13.240987 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:13.245030 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:13.255387 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.401843 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.417093 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:13.416720 1225677 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:13.423303 1225677 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:13.426149 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.647974 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.667990 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:30:13.668105 1225677 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
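
The kubeadm.go:492 warning above rewrites the client's API endpoint because the HA VIP (192.168.49.254:8443) recorded in the kubeconfig is not serving yet. A hypothetical sketch of that kind of fallback; the TCP probe and the fallback address here are assumptions for illustration, not minikube's actual logic:

package main

import (
	"fmt"
	"net"
	"net/url"
	"time"

	"k8s.io/client-go/rest"
)

// overrideStaleHost swaps cfg.Host for fallback when the configured endpoint
// does not accept TCP connections (e.g. the kube-vip VIP has not come up yet).
func overrideStaleHost(cfg *rest.Config, fallback string) {
	u, err := url.Parse(cfg.Host)
	if err != nil {
		return
	}
	conn, err := net.DialTimeout("tcp", u.Host, 2*time.Second)
	if err == nil {
		conn.Close()
		return // current host answers; keep it
	}
	fmt.Printf("Overriding stale ClientConfig host %s with %s\n", cfg.Host, fallback)
	cfg.Host = fallback
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.254:8443"}
	overrideStaleHost(cfg, "https://192.168.49.2:8443")
	fmt.Println(cfg.Host)
}
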
	I1217 01:30:13.668438 1225677 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201323 1225677 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:30:14.201352 1225677 node_ready.go:38] duration metric: took 532.861298ms for node "ha-202151-m02" to be "Ready" ...
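
The node_ready.go lines above poll the API until ha-202151-m02 reports the Ready condition, which here takes about half a second. A minimal client-go sketch of that wait, not minikube's code; the kubeconfig path, node name, and 6-minute timeout are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True or the
// timeout expires; transient API errors simply cause another poll.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), cs, "ha-202151-m02", 6*time.Minute))
}
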
	I1217 01:30:14.201366 1225677 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:14.201430 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:14.702397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.202165 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.701679 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.202436 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.701593 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.202167 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.702134 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.201871 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.202178 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.702421 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.201608 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.701963 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.201849 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.702468 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.201659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.702284 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.202447 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.701767 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.201870 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.701725 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.202161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.701566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.201668 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.702034 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.202090 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.201787 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.701530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.202044 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.702049 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.202554 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.201868 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.702179 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.202396 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.702380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.701675 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.201765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.701936 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.201563 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.701569 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.202228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.702471 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.201812 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.701808 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.201588 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.701513 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.202142 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.701610 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.201867 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.702427 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.202172 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.202404 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.701704 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.201454 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.702205 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.201850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.702118 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.201665 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.702497 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.201634 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.701590 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.202217 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.202252 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.701540 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.702332 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.202380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.701545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.202215 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.701654 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.202277 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.701599 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.202236 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.702370 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.201552 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.702331 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.201545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.202549 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.202225 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.701571 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.202016 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.702392 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.212791 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.701639 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.202292 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.701781 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.201523 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.701618 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.201666 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.702192 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.202218 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.701749 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.201582 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.701583 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.201568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.702305 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.202030 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.702244 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.201601 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.702328 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.202314 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.701594 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.202413 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.701574 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.201566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.702440 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.701568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.202474 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
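
The run of pgrep calls above is a roughly 500ms polling loop: api_server.go keeps asking the node for a kube-apiserver PID and, after a minute without one, falls back to collecting diagnostics below. A sketch of the same polling pattern, using local exec rather than the SSH runner in the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` every
// 500ms until a PID appears or the deadline passes, mirroring the loop above.
func waitForAPIServerProcess(ctx context.Context, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return strings.TrimSpace(string(out)), nil // PID found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(context.Background(), time.Minute)
	fmt.Println(pid, err)
}
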
	I1217 01:31:13.701537 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:13.701628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:13.737091 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:13.737114 1225677 cri.go:89] found id: ""
	I1217 01:31:13.737124 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:13.737180 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.741133 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:13.741205 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:13.767828 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:13.767849 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:13.767854 1225677 cri.go:89] found id: ""
	I1217 01:31:13.767861 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:13.767916 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.772125 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.775836 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:13.775913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:13.807345 1225677 cri.go:89] found id: ""
	I1217 01:31:13.807369 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.807377 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:13.807384 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:13.807444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:13.838797 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:13.838817 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:13.838821 1225677 cri.go:89] found id: ""
	I1217 01:31:13.838829 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:13.838887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.843081 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.846896 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:13.846969 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:13.886939 1225677 cri.go:89] found id: ""
	I1217 01:31:13.886968 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.886977 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:13.886983 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:13.887045 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:13.927324 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:13.927350 1225677 cri.go:89] found id: ""
	I1217 01:31:13.927359 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:13.927418 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.932191 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:13.932281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:13.963576 1225677 cri.go:89] found id: ""
	I1217 01:31:13.963605 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.963614 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:13.963623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:13.963636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:14.061267 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:14.061313 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:14.083208 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:14.083318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:14.113297 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:14.113328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:14.168503 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:14.168540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:14.225258 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:14.225299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:14.254658 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:14.254688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:14.329954 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:14.329994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:14.363830 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:14.363859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:14.780185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:14.780213 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:14.780229 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:14.821746 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:14.821787 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
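
Each gathering pass above repeats the same two steps per component: list candidate container IDs with `crictl ps -a --quiet --name=<component>`, then fetch the last 400 lines from each with `crictl logs --tail 400 <id>`. A minimal sketch of that pattern, assuming crictl and sudo are available on the node as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs resolves container IDs for one component and prints the
// tail of each container's log, mirroring the commands in the log above.
func gatherComponentLogs(name string) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		fmt.Printf("listing %s containers: %v\n", name, err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("==> %s [%s] <==\n", name, id)
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Print(string(logs))
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		gatherComponentLogs(c)
	}
}
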
	I1217 01:31:17.348276 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:17.359506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:17.359576 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:17.385494 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.385522 1225677 cri.go:89] found id: ""
	I1217 01:31:17.385531 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:17.385587 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.389291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:17.389381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:17.417467 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:17.417488 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:17.417493 1225677 cri.go:89] found id: ""
	I1217 01:31:17.417501 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:17.417557 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.421553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.425305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:17.425381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:17.452893 1225677 cri.go:89] found id: ""
	I1217 01:31:17.452925 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.452935 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:17.452945 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:17.453003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:17.479708 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.479730 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.479736 1225677 cri.go:89] found id: ""
	I1217 01:31:17.479743 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:17.479799 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.484009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.487543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:17.487617 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:17.522723 1225677 cri.go:89] found id: ""
	I1217 01:31:17.522751 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.522760 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:17.522767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:17.522829 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:17.550998 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.551023 1225677 cri.go:89] found id: ""
	I1217 01:31:17.551032 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:17.551086 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.554682 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:17.554767 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:17.587610 1225677 cri.go:89] found id: ""
	I1217 01:31:17.587650 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.587659 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:17.587684 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:17.587709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.616971 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:17.617002 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:17.692991 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:17.693034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:17.741052 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:17.741081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:17.761199 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:17.761228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.792936 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:17.793007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.845716 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:17.845753 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.881065 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:17.881096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:17.982043 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:17.982082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:18.070492 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:18.070517 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:18.070531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:18.117818 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:18.117911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.668542 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:20.679148 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:20.679242 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:20.706664 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:20.706687 1225677 cri.go:89] found id: ""
	I1217 01:31:20.706697 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:20.706757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.711072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:20.711147 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:20.737754 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:20.737779 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.737784 1225677 cri.go:89] found id: ""
	I1217 01:31:20.737792 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:20.737847 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.741755 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.745506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:20.745577 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:20.778364 1225677 cri.go:89] found id: ""
	I1217 01:31:20.778386 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.778394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:20.778400 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:20.778458 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:20.807237 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.807262 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:20.807267 1225677 cri.go:89] found id: ""
	I1217 01:31:20.807275 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:20.807361 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.811689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.815755 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:20.815857 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:20.842433 1225677 cri.go:89] found id: ""
	I1217 01:31:20.842454 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.842464 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:20.842470 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:20.842526 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:20.869792 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:20.869821 1225677 cri.go:89] found id: ""
	I1217 01:31:20.869831 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:20.869887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.873765 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:20.873847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:20.900911 1225677 cri.go:89] found id: ""
	I1217 01:31:20.900940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.900952 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:20.900961 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:20.900974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.954883 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:20.954920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:21.002822 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:21.002852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:21.108368 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:21.108406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:21.135557 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:21.135588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:21.176576 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:21.176610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:21.205927 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:21.205961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:21.232870 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:21.232897 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:21.312344 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:21.312377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:21.333806 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:21.333836 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:21.415860 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:21.415895 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:21.415909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:23.961577 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:23.974520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:23.974616 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:24.008513 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.008538 1225677 cri.go:89] found id: ""
	I1217 01:31:24.008548 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:24.008627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.013203 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:24.013311 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:24.041344 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.041369 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.041374 1225677 cri.go:89] found id: ""
	I1217 01:31:24.041383 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:24.041499 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.045778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.049690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:24.049764 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:24.076869 1225677 cri.go:89] found id: ""
	I1217 01:31:24.076902 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.076912 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:24.076919 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:24.076982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:24.115429 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.115504 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.115535 1225677 cri.go:89] found id: ""
	I1217 01:31:24.115571 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:24.115649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.121035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.126165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:24.126286 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:24.153228 1225677 cri.go:89] found id: ""
	I1217 01:31:24.153253 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.153262 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:24.153268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:24.153326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:24.196715 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:24.196801 1225677 cri.go:89] found id: ""
	I1217 01:31:24.196825 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:24.196912 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.201554 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:24.201642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:24.230189 1225677 cri.go:89] found id: ""
	I1217 01:31:24.230214 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.230223 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:24.230232 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:24.230244 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:24.308144 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:24.308188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:24.326634 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:24.326664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:24.400916 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:24.400938 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:24.400952 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.448701 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:24.448743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.482276 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:24.482309 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:24.515534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:24.515567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:24.625661 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:24.625708 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.652399 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:24.652439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.693518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:24.693556 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.750020 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:24.750059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.278748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:27.290609 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:27.290689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:27.316966 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.316991 1225677 cri.go:89] found id: ""
	I1217 01:31:27.316999 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:27.317054 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.320866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:27.320938 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:27.347398 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.347422 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.347427 1225677 cri.go:89] found id: ""
	I1217 01:31:27.347436 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:27.347496 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.351488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.355369 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:27.355442 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:27.381534 1225677 cri.go:89] found id: ""
	I1217 01:31:27.381564 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.381574 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:27.381580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:27.381662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:27.410739 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.410810 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.410822 1225677 cri.go:89] found id: ""
	I1217 01:31:27.410831 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:27.410892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.415095 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.419246 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:27.419364 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:27.447586 1225677 cri.go:89] found id: ""
	I1217 01:31:27.447612 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.447622 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:27.447629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:27.447693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:27.474916 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.474941 1225677 cri.go:89] found id: ""
	I1217 01:31:27.474950 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:27.475035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.479118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:27.479203 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:27.506051 1225677 cri.go:89] found id: ""
	I1217 01:31:27.506078 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.506087 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:27.506097 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:27.506108 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:27.545535 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:27.545568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:27.641749 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:27.641830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:27.661191 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:27.661226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:27.738097 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:27.738120 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:27.738134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.782011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:27.782048 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.834514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:27.834550 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.905140 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:27.905177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.940830 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:27.940862 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.969106 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:27.969136 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.998807 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:27.998835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:30.578811 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:30.590365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:30.590444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:30.618562 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:30.618585 1225677 cri.go:89] found id: ""
	I1217 01:31:30.618594 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:30.618677 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.623874 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:30.624003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:30.654712 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:30.654734 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.654740 1225677 cri.go:89] found id: ""
	I1217 01:31:30.654747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:30.654831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.658663 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.662256 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:30.662333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:30.690956 1225677 cri.go:89] found id: ""
	I1217 01:31:30.690983 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.691000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:30.691008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:30.691073 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:30.720079 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.720104 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.720110 1225677 cri.go:89] found id: ""
	I1217 01:31:30.720118 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:30.720190 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.724290 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.728443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:30.728569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:30.762597 1225677 cri.go:89] found id: ""
	I1217 01:31:30.762665 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.762683 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:30.762690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:30.762769 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:30.793999 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:30.794022 1225677 cri.go:89] found id: ""
	I1217 01:31:30.794031 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:30.794087 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.798031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:30.798111 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:30.825811 1225677 cri.go:89] found id: ""
	I1217 01:31:30.825838 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.825848 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:30.825858 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:30.825900 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.874308 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:30.874349 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.932548 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:30.932596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.973410 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:30.973440 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:31.061854 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:31.061893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:31.081279 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:31.081308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:31.173788 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:31.173816 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:31.173832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:31.203476 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:31.203507 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:31.242819 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:31.242857 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:31.270107 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:31.270137 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:31.301308 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:31.301338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:33.901065 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:33.913301 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:33.913455 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:33.945005 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:33.945033 1225677 cri.go:89] found id: ""
	I1217 01:31:33.945042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:33.945100 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.949030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:33.949099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:33.980996 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:33.981019 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:33.981024 1225677 cri.go:89] found id: ""
	I1217 01:31:33.981032 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:33.981090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.985533 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.989328 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:33.989424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:34.020066 1225677 cri.go:89] found id: ""
	I1217 01:31:34.020105 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.020115 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:34.020123 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:34.020214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:34.054526 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.054551 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.054558 1225677 cri.go:89] found id: ""
	I1217 01:31:34.054566 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:34.054628 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.058716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.062466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:34.062539 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:34.100752 1225677 cri.go:89] found id: ""
	I1217 01:31:34.100777 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.100787 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:34.100794 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:34.100856 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:34.133409 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.133431 1225677 cri.go:89] found id: ""
	I1217 01:31:34.133440 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:34.133498 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.137315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:34.137386 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:34.169015 1225677 cri.go:89] found id: ""
	I1217 01:31:34.169048 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.169058 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:34.169068 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:34.169081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:34.230112 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:34.230152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:34.275030 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:34.275071 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.303312 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:34.303341 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:34.323613 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:34.323791 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.377596 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:34.377632 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.405931 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:34.405961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:34.485309 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:34.485348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:34.537697 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:34.537780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:34.640362 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:34.640409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:34.719202 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:34.719227 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:34.719241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.248692 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:37.259883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:37.259952 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:37.288047 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.288071 1225677 cri.go:89] found id: ""
	I1217 01:31:37.288092 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:37.288147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.291723 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:37.291791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:37.320405 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.320468 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:37.320473 1225677 cri.go:89] found id: ""
	I1217 01:31:37.320481 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:37.320536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.324331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.327725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:37.327795 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:37.353914 1225677 cri.go:89] found id: ""
	I1217 01:31:37.353940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.353949 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:37.353956 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:37.354033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:37.380050 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.380082 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:37.380088 1225677 cri.go:89] found id: ""
	I1217 01:31:37.380097 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:37.380169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.384466 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.388616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:37.388737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:37.434167 1225677 cri.go:89] found id: ""
	I1217 01:31:37.434203 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.434213 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:37.434235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:37.434327 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:37.463397 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.463418 1225677 cri.go:89] found id: ""
	I1217 01:31:37.463426 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:37.463501 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.467357 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:37.467429 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:37.496476 1225677 cri.go:89] found id: ""
	I1217 01:31:37.496504 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.496514 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:37.496523 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:37.496534 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:37.580269 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:37.580312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:37.598989 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:37.599020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:37.669887 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:37.669956 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:37.669985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.696910 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:37.696934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.741514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:37.741546 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.797620 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:37.797657 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.827250 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:37.827277 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:37.860098 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:37.860127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:37.981956 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:37.982003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:38.045819 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:38.045855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.580761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:40.592635 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:40.592708 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:40.620832 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:40.620856 1225677 cri.go:89] found id: ""
	I1217 01:31:40.620866 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:40.620942 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.624827 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:40.624914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:40.662358 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.662381 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.662386 1225677 cri.go:89] found id: ""
	I1217 01:31:40.662394 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:40.662452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.666347 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.669969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:40.670068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:40.698897 1225677 cri.go:89] found id: ""
	I1217 01:31:40.698922 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.698931 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:40.698938 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:40.699026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:40.726184 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.726254 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.726265 1225677 cri.go:89] found id: ""
	I1217 01:31:40.726273 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:40.726331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.730221 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.734070 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:40.734150 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:40.760090 1225677 cri.go:89] found id: ""
	I1217 01:31:40.760116 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.760125 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:40.760185 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:40.760251 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:40.790670 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:40.790693 1225677 cri.go:89] found id: ""
	I1217 01:31:40.790702 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:40.790754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.794861 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:40.794936 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:40.826103 1225677 cri.go:89] found id: ""
	I1217 01:31:40.826129 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.826138 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:40.826147 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:40.826160 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.878987 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:40.879066 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.924714 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:40.924751 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.980944 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:40.980981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:41.072994 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:41.073031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:41.105014 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:41.105042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:41.212780 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:41.212818 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:41.241014 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:41.241042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:41.277652 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:41.277684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:41.308943 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:41.308972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:41.328092 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:41.328123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:41.410133 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:43.911410 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:43.924272 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:43.924351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:43.953227 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:43.953252 1225677 cri.go:89] found id: ""
	I1217 01:31:43.953261 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:43.953337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.957558 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:43.957674 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:43.984394 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:43.984493 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:43.984513 1225677 cri.go:89] found id: ""
	I1217 01:31:43.984547 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:43.984626 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.988727 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.992395 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:43.992531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:44.023165 1225677 cri.go:89] found id: ""
	I1217 01:31:44.023242 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.023265 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:44.023285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:44.023376 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:44.056175 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.056249 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.056268 1225677 cri.go:89] found id: ""
	I1217 01:31:44.056293 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:44.056373 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.060006 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.063548 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:44.063623 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:44.091849 1225677 cri.go:89] found id: ""
	I1217 01:31:44.091875 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.091886 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:44.091892 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:44.091950 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:44.125771 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.125837 1225677 cri.go:89] found id: ""
	I1217 01:31:44.125861 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:44.125938 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.129707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:44.129781 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:44.157267 1225677 cri.go:89] found id: ""
	I1217 01:31:44.157343 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.157359 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:44.157369 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:44.157380 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:44.179921 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:44.180042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:44.227426 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:44.227495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:44.268056 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:44.268089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:44.312908 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:44.312943 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.344639 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:44.344673 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.370623 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:44.370650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:44.400984 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:44.401017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:44.494253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:44.494291 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:44.563778 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:44.563859 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:44.563887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.630776 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:44.630812 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.217775 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:47.228858 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:47.228999 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:47.258264 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.258287 1225677 cri.go:89] found id: ""
	I1217 01:31:47.258305 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:47.258366 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.262265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:47.262366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:47.293485 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.293508 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.293552 1225677 cri.go:89] found id: ""
	I1217 01:31:47.293562 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:47.293623 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.297395 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.300792 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:47.300866 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:47.329792 1225677 cri.go:89] found id: ""
	I1217 01:31:47.329818 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.329827 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:47.329833 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:47.329890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:47.356681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.356747 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:47.356758 1225677 cri.go:89] found id: ""
	I1217 01:31:47.356767 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:47.356839 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.360948 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.364494 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:47.364598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:47.390993 1225677 cri.go:89] found id: ""
	I1217 01:31:47.391021 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.391031 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:47.391037 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:47.391099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:47.417453 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.417517 1225677 cri.go:89] found id: ""
	I1217 01:31:47.417541 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:47.417618 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.421365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:47.421437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:47.447227 1225677 cri.go:89] found id: ""
	I1217 01:31:47.447254 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.447264 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:47.447273 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:47.447285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.474445 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:47.474475 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:47.546929 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:47.546947 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:47.546962 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.621943 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:47.621985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:47.653654 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:47.653679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:47.751509 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:47.751548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:47.773290 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:47.773323 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.802347 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:47.802378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.849646 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:47.849680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.894275 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:47.894315 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.949242 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:47.949281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.480769 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:50.491711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:50.491827 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:50.519320 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.519345 1225677 cri.go:89] found id: ""
	I1217 01:31:50.519353 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:50.519440 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.523424 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:50.523533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:50.551627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:50.551652 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:50.551658 1225677 cri.go:89] found id: ""
	I1217 01:31:50.551665 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:50.551751 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.555585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.559244 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:50.559347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:50.586218 1225677 cri.go:89] found id: ""
	I1217 01:31:50.586241 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.586249 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:50.586255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:50.586333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:50.618629 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.618661 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.618667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.618675 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:50.618776 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.622850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.626687 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:50.626824 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:50.659667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.659703 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.659713 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:50.659738 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:50.659817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:50.686997 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.687069 1225677 cri.go:89] found id: ""
	I1217 01:31:50.687092 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:50.687160 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.690709 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:50.690823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:50.721432 1225677 cri.go:89] found id: ""
	I1217 01:31:50.721509 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.721534 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:50.721553 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:50.721583 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.748223 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:50.748250 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.807290 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:50.807328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.835575 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:50.835603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.861513 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:50.861539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:50.937079 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:50.937118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:51.023701 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:51.023722 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:51.023736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:51.063322 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:51.063360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:51.134936 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:51.134983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:51.172581 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:51.172611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:51.279920 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:51.279958 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:53.800293 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:53.813493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:53.813572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:53.855699 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:53.855727 1225677 cri.go:89] found id: ""
	I1217 01:31:53.855737 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:53.855790 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.860842 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:53.860915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:53.905688 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:53.905715 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:53.905720 1225677 cri.go:89] found id: ""
	I1217 01:31:53.905727 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:53.905796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.911027 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.916033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:53.916105 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:53.971312 1225677 cri.go:89] found id: ""
	I1217 01:31:53.971339 1225677 logs.go:282] 0 containers: []
	W1217 01:31:53.971349 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:53.971356 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:53.971477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:54.021427 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.021456 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:54.021474 1225677 cri.go:89] found id: ""
	I1217 01:31:54.021488 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:54.021585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.030798 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.035177 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:54.035371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:54.113099 1225677 cri.go:89] found id: ""
	I1217 01:31:54.113124 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.113133 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:54.113139 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:54.113246 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:54.166627 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.166651 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.166658 1225677 cri.go:89] found id: ""
	I1217 01:31:54.166665 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:54.166783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.171754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.182182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:54.182283 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:54.234503 1225677 cri.go:89] found id: ""
	I1217 01:31:54.234567 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.234591 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:54.234615 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:54.234642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.275461 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:54.275532 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:54.366758 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:54.366801 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:54.403474 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:54.403513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:54.422090 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:54.422131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:54.486461 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:54.486497 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.553429 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:54.553466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.599563 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:54.599593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:54.706755 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:54.706795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:54.812798 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:54.812822 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:54.812835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:54.838401 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:54.838433 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:54.893784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:54.893823 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.427168 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:57.438551 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:57.438655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:57.468636 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:57.468660 1225677 cri.go:89] found id: ""
	I1217 01:31:57.468669 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:57.468726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.472745 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:57.472819 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:57.500682 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:57.500702 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.500707 1225677 cri.go:89] found id: ""
	I1217 01:31:57.500714 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:57.500777 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.504719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.508458 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:57.508557 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:57.540789 1225677 cri.go:89] found id: ""
	I1217 01:31:57.540813 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.540822 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:57.540828 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:57.540889 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:57.570366 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.570392 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.570398 1225677 cri.go:89] found id: ""
	I1217 01:31:57.570406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:57.570462 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.574531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.578702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:57.578782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:57.608017 1225677 cri.go:89] found id: ""
	I1217 01:31:57.608042 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.608051 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:57.608058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:57.608122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:57.634195 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:57.634218 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.634224 1225677 cri.go:89] found id: ""
	I1217 01:31:57.634232 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:57.634317 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.638339 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.642068 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:57.642166 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:57.669214 1225677 cri.go:89] found id: ""
	I1217 01:31:57.669250 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.669259 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:57.669268 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:57.669284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.733958 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:57.733991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.790688 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:57.790731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.825378 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:57.825409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:57.903425 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:57.903465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:57.977243 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:57.977266 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:57.977280 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:58.008228 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:58.008262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:58.044832 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:58.044861 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:58.076961 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:58.077009 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:58.174022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:58.174061 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:58.194526 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:58.194561 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:58.225629 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:58.225658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.768659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:00.779781 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:00.779855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:00.809961 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:00.809984 1225677 cri.go:89] found id: ""
	I1217 01:32:00.809993 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:00.810055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.814113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:00.814232 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:00.842110 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.842179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:00.842193 1225677 cri.go:89] found id: ""
	I1217 01:32:00.842202 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:00.842259 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.846284 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.850463 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:00.850535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:00.877321 1225677 cri.go:89] found id: ""
	I1217 01:32:00.877347 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.877357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:00.877364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:00.877424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:00.903950 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:00.904025 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:00.904044 1225677 cri.go:89] found id: ""
	I1217 01:32:00.904065 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:00.904183 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.907995 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.911685 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:00.911762 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:00.940826 1225677 cri.go:89] found id: ""
	I1217 01:32:00.940856 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.940865 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:00.940871 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:00.940931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:00.967056 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:00.967077 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:00.967088 1225677 cri.go:89] found id: ""
	I1217 01:32:00.967097 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:00.967175 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.970953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.975717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:00.975791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:01.010237 1225677 cri.go:89] found id: ""
	I1217 01:32:01.010262 1225677 logs.go:282] 0 containers: []
	W1217 01:32:01.010272 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:01.010281 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:01.010294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:01.030320 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:01.030353 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:01.055381 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:01.055409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:01.097515 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:01.097548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:01.166756 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:01.166797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:01.208792 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:01.208824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:01.246024 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:01.246056 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:01.340436 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:01.340519 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:01.412662 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:01.412684 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:01.412699 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:01.467190 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:01.467228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:01.500459 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:01.500486 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:01.531449 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:01.531477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:04.134627 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:04.145902 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:04.145978 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:04.185746 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.185766 1225677 cri.go:89] found id: ""
	I1217 01:32:04.185774 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:04.185831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.189797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:04.189867 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:04.228673 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.228694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.228698 1225677 cri.go:89] found id: ""
	I1217 01:32:04.228706 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:04.228759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.233260 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.238075 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:04.238212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:04.268955 1225677 cri.go:89] found id: ""
	I1217 01:32:04.268983 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.268992 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:04.268999 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:04.269102 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:04.299973 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.300041 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.300061 1225677 cri.go:89] found id: ""
	I1217 01:32:04.300088 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:04.300185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.303813 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.307456 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:04.307533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:04.334293 1225677 cri.go:89] found id: ""
	I1217 01:32:04.334319 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.334331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:04.334338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:04.334398 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:04.360886 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.360906 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.360910 1225677 cri.go:89] found id: ""
	I1217 01:32:04.360918 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:04.360974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.365024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.368933 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:04.369005 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:04.397116 1225677 cri.go:89] found id: ""
	I1217 01:32:04.397140 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.397149 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:04.397159 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:04.397174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:04.490637 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:04.490721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.531861 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:04.531938 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.577801 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:04.577838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.635487 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:04.635524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.667260 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:04.667290 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:04.718117 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:04.718146 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:04.737680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:04.737711 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:04.825872 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:04.825894 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:04.825908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.858804 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:04.858833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.887920 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:04.887953 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.916371 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:04.916476 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:07.492728 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:07.504442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:07.504532 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:07.538372 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.538403 1225677 cri.go:89] found id: ""
	I1217 01:32:07.538442 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:07.538517 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.542523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:07.542597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:07.576339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:07.576360 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:07.576364 1225677 cri.go:89] found id: ""
	I1217 01:32:07.576372 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:07.576459 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.580149 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.584111 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:07.584196 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:07.610578 1225677 cri.go:89] found id: ""
	I1217 01:32:07.610605 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.610614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:07.610621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:07.610678 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:07.637129 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:07.637151 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:07.637157 1225677 cri.go:89] found id: ""
	I1217 01:32:07.637164 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:07.637217 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.641090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.644872 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:07.644992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:07.679300 1225677 cri.go:89] found id: ""
	I1217 01:32:07.679322 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.679331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:07.679350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:07.679419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:07.719129 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:07.719155 1225677 cri.go:89] found id: ""
	I1217 01:32:07.719164 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:07.719231 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.723681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:07.723755 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:07.756924 1225677 cri.go:89] found id: ""
	I1217 01:32:07.756950 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.756969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:07.756979 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:07.756991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:07.856049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:07.856088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:07.935429 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:07.935456 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:07.935469 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.961013 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:07.961042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:08.005989 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:08.006024 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:08.039061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:08.039092 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:08.058159 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:08.058194 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:08.112456 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:08.112490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:08.176389 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:08.176457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:08.215782 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:08.215809 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:08.244713 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:08.244743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:10.828143 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:10.838717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:10.838793 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:10.869672 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:10.869696 1225677 cri.go:89] found id: ""
	I1217 01:32:10.869705 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:10.869761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.873603 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:10.873720 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:10.900811 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:10.900837 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:10.900843 1225677 cri.go:89] found id: ""
	I1217 01:32:10.900851 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:10.900906 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.904643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.908193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:10.908261 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:10.935598 1225677 cri.go:89] found id: ""
	I1217 01:32:10.935624 1225677 logs.go:282] 0 containers: []
	W1217 01:32:10.935634 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:10.935641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:10.935698 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:10.966869 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:10.966894 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:10.966899 1225677 cri.go:89] found id: ""
	I1217 01:32:10.966907 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:10.966962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.970920 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.974605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:10.974715 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:11.012577 1225677 cri.go:89] found id: ""
	I1217 01:32:11.012602 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.012612 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:11.012618 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:11.012680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:11.048075 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.048100 1225677 cri.go:89] found id: ""
	I1217 01:32:11.048130 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:11.048185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:11.052014 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:11.052089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:11.084486 1225677 cri.go:89] found id: ""
	I1217 01:32:11.084511 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.084524 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:11.084533 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:11.084545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:11.192042 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:11.192076 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:11.218345 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:11.218378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:11.261837 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:11.261869 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:11.321100 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:11.321138 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:11.356360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:11.356390 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:11.433012 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:11.433054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:11.511248 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:11.511270 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:11.511287 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:11.549584 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:11.549614 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:11.596753 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:11.596786 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.626208 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:11.626240 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.173611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:14.187629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:14.187704 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:14.223146 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.223170 1225677 cri.go:89] found id: ""
	I1217 01:32:14.223179 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:14.223264 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.227607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:14.227721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:14.255753 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:14.255791 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.255796 1225677 cri.go:89] found id: ""
	I1217 01:32:14.255804 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:14.255881 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.259963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.263644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:14.263717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:14.290575 1225677 cri.go:89] found id: ""
	I1217 01:32:14.290599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.290614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:14.290621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:14.290681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:14.318287 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.318309 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.318314 1225677 cri.go:89] found id: ""
	I1217 01:32:14.318323 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:14.318378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.322352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.326073 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:14.326157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:14.352179 1225677 cri.go:89] found id: ""
	I1217 01:32:14.352205 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.352214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:14.352221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:14.352304 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:14.380539 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.380565 1225677 cri.go:89] found id: ""
	I1217 01:32:14.380582 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:14.380678 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.385134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:14.385210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:14.417374 1225677 cri.go:89] found id: ""
	I1217 01:32:14.417407 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.417417 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:14.417441 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:14.417457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.464173 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:14.464209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.491958 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:14.492035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.547112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:14.547180 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:14.617502 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:14.617525 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:14.617548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.645669 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:14.645697 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.705027 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:14.705070 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.738615 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:14.738689 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:14.819881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:14.819961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:14.917702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:14.917739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:14.940092 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:14.940127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.482077 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:17.493126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:17.493227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:17.520116 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.520137 1225677 cri.go:89] found id: ""
	I1217 01:32:17.520155 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:17.520234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.524492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:17.524572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:17.553355 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.553419 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:17.553439 1225677 cri.go:89] found id: ""
	I1217 01:32:17.553454 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:17.553512 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.557145 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.560580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:17.560663 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:17.586798 1225677 cri.go:89] found id: ""
	I1217 01:32:17.586824 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.586843 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:17.586850 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:17.586915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:17.614063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.614096 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:17.614102 1225677 cri.go:89] found id: ""
	I1217 01:32:17.614110 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:17.614174 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.618083 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.621593 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:17.621662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:17.652917 1225677 cri.go:89] found id: ""
	I1217 01:32:17.652943 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.652964 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:17.652972 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:17.653029 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:17.679412 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.679435 1225677 cri.go:89] found id: ""
	I1217 01:32:17.679443 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:17.679508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.683530 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:17.683606 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:17.714591 1225677 cri.go:89] found id: ""
	I1217 01:32:17.714618 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.714628 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:17.714638 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:17.714652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.774158 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:17.774193 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.802731 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:17.802759 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:17.837385 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:17.837413 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:17.948723 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:17.948766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:17.967594 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:17.967622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.997257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:17.997350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:18.046163 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:18.046204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:18.075264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:18.075345 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:18.179955 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:18.180007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:18.261983 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:18.262017 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:18.262034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.814850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:20.826637 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:20.826710 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:20.867818 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:20.867839 1225677 cri.go:89] found id: ""
	I1217 01:32:20.867847 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:20.867902 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.871814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:20.871895 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:20.902722 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.902742 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:20.902746 1225677 cri.go:89] found id: ""
	I1217 01:32:20.902755 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:20.902808 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.907236 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.911156 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:20.911230 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:20.937933 1225677 cri.go:89] found id: ""
	I1217 01:32:20.937959 1225677 logs.go:282] 0 containers: []
	W1217 01:32:20.937968 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:20.937974 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:20.938063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:20.965558 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:20.965581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:20.965587 1225677 cri.go:89] found id: ""
	I1217 01:32:20.965595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:20.965652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.969565 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.973428 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:20.973498 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:21.012487 1225677 cri.go:89] found id: ""
	I1217 01:32:21.012512 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.012521 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:21.012527 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:21.012590 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:21.041411 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.041443 1225677 cri.go:89] found id: ""
	I1217 01:32:21.041455 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:21.041515 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:21.045571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:21.045672 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:21.074982 1225677 cri.go:89] found id: ""
	I1217 01:32:21.075005 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.075014 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:21.075023 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:21.075036 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:21.105151 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:21.105181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.131324 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:21.131398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:21.228426 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:21.228461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:21.285988 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:21.286020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:21.369964 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:21.370005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:21.406263 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:21.406295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:21.425680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:21.425710 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:21.503044 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:21.503067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:21.503083 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:21.533119 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:21.533147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:21.584619 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:21.584652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.145239 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:24.156031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:24.156112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:24.191491 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.191515 1225677 cri.go:89] found id: ""
	I1217 01:32:24.191523 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:24.191579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.196271 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:24.196344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:24.229412 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.229433 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.229437 1225677 cri.go:89] found id: ""
	I1217 01:32:24.229445 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:24.229502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.233353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.237055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:24.237137 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:24.264226 1225677 cri.go:89] found id: ""
	I1217 01:32:24.264252 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.264262 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:24.264268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:24.264330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:24.300946 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.300972 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.300977 1225677 cri.go:89] found id: ""
	I1217 01:32:24.300984 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:24.301038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.304900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.308160 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:24.308277 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:24.334573 1225677 cri.go:89] found id: ""
	I1217 01:32:24.334596 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.334606 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:24.334612 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:24.334670 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:24.367769 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.367791 1225677 cri.go:89] found id: ""
	I1217 01:32:24.367800 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:24.367853 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.371482 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:24.371586 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:24.398071 1225677 cri.go:89] found id: ""
	I1217 01:32:24.398095 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.398104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:24.398112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:24.398124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:24.466998 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:24.467073 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:24.467093 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.494797 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:24.494826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.566818 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:24.566859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.627760 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:24.627797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.657250 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:24.657278 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.683514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:24.683549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:24.703093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:24.703129 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.757376 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:24.757411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:24.839791 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:24.839826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:24.883947 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:24.883978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:27.492559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:27.503372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:27.503445 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:27.541590 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:27.541611 1225677 cri.go:89] found id: ""
	I1217 01:32:27.541620 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:27.541675 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.545373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:27.545448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:27.571462 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:27.571486 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:27.571491 1225677 cri.go:89] found id: ""
	I1217 01:32:27.571499 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:27.571555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.575671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.579240 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:27.579332 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:27.612215 1225677 cri.go:89] found id: ""
	I1217 01:32:27.612245 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.612254 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:27.612261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:27.612339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:27.639672 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:27.639696 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.639701 1225677 cri.go:89] found id: ""
	I1217 01:32:27.639708 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:27.639782 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.643953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.647820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:27.647942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:27.673115 1225677 cri.go:89] found id: ""
	I1217 01:32:27.673141 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.673150 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:27.673157 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:27.673215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:27.703404 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.703428 1225677 cri.go:89] found id: ""
	I1217 01:32:27.703437 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:27.703566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.708031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:27.708106 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:27.736748 1225677 cri.go:89] found id: ""
	I1217 01:32:27.736770 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.736779 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:27.736789 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:27.736802 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.763699 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:27.763727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.790990 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:27.791020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:27.871644 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:27.871680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:27.904392 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:27.904499 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:27.926297 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:27.926333 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:28.002149 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:28.002177 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:28.002196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:28.030901 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:28.030933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:28.070431 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:28.070463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:28.124957 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:28.124994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:28.185427 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:28.185465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:30.787761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:30.798953 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:30.799025 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:30.826532 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:30.826561 1225677 cri.go:89] found id: ""
	I1217 01:32:30.826570 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:30.826631 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.830429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:30.830503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:30.856397 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:30.856449 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:30.856462 1225677 cri.go:89] found id: ""
	I1217 01:32:30.856470 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:30.856524 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.860460 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.864121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:30.864204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:30.893119 1225677 cri.go:89] found id: ""
	I1217 01:32:30.893143 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.893153 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:30.893166 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:30.893225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:30.942371 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:30.942393 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:30.942398 1225677 cri.go:89] found id: ""
	I1217 01:32:30.942406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:30.942463 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.947748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.953053 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:30.953140 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:30.991763 1225677 cri.go:89] found id: ""
	I1217 01:32:30.991793 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.991802 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:30.991817 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:30.991888 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:31.026936 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.026958 1225677 cri.go:89] found id: ""
	I1217 01:32:31.026967 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:31.027022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:31.031253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:31.031338 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:31.060606 1225677 cri.go:89] found id: ""
	I1217 01:32:31.060632 1225677 logs.go:282] 0 containers: []
	W1217 01:32:31.060641 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:31.060650 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:31.060666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.089805 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:31.089837 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:31.179774 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:31.179814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:31.231705 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:31.231739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:31.264982 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:31.265014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:31.295319 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:31.295348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:31.398598 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:31.398635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:31.418439 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:31.418473 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:31.505328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:31.505348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:31.505364 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:31.534574 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:31.534604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:31.584571 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:31.584607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.145660 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:34.156555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:34.156680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:34.189334 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.189353 1225677 cri.go:89] found id: ""
	I1217 01:32:34.189361 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:34.189415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.193025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:34.193117 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:34.229137 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.229160 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.229165 1225677 cri.go:89] found id: ""
	I1217 01:32:34.229176 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:34.229234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.232921 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.236260 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:34.236361 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:34.264990 1225677 cri.go:89] found id: ""
	I1217 01:32:34.265013 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.265022 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:34.265028 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:34.265086 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:34.292130 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.292205 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.292225 1225677 cri.go:89] found id: ""
	I1217 01:32:34.292250 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:34.292344 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.295987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.299388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:34.299500 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:34.325943 1225677 cri.go:89] found id: ""
	I1217 01:32:34.326026 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.326042 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:34.326049 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:34.326108 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:34.363328 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.363351 1225677 cri.go:89] found id: ""
	I1217 01:32:34.363361 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:34.363415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.367803 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:34.367878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:34.394984 1225677 cri.go:89] found id: ""
	I1217 01:32:34.395011 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.395020 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:34.395029 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:34.395065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:34.470015 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:34.470036 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:34.470049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.496057 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:34.496091 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.549522 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:34.549555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.592693 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:34.592728 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.652425 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:34.652505 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.680716 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:34.680747 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.707492 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:34.707522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:34.787410 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:34.787492 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:34.892246 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:34.892284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:34.910499 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:34.910530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:37.463203 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:37.474127 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:37.474200 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:37.506946 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.507018 1225677 cri.go:89] found id: ""
	I1217 01:32:37.507042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:37.507123 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.511460 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:37.511535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:37.546992 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:37.547014 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:37.547020 1225677 cri.go:89] found id: ""
	I1217 01:32:37.547028 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:37.547090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.550864 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.554364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:37.554450 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:37.592224 1225677 cri.go:89] found id: ""
	I1217 01:32:37.592353 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.592394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:37.592437 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:37.592579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:37.620557 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.620581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:37.620587 1225677 cri.go:89] found id: ""
	I1217 01:32:37.620595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:37.620691 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.624719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.628465 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:37.628541 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:37.657843 1225677 cri.go:89] found id: ""
	I1217 01:32:37.657870 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.657878 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:37.657885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:37.657955 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:37.686792 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.686825 1225677 cri.go:89] found id: ""
	I1217 01:32:37.686834 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:37.686898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.690651 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:37.690783 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:37.719977 1225677 cri.go:89] found id: ""
	I1217 01:32:37.720000 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.720009 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:37.720018 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:37.720030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:37.738580 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:37.738610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:37.814847 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:37.814869 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:37.814883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.840694 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:37.840723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.901817 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:37.901855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.935757 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:37.935839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:38.014642 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:38.014679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:38.115079 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:38.115123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:38.157390 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:38.157423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:38.204086 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:38.204123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:38.235323 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:38.235355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:40.766175 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:40.777746 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:40.777818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:40.809026 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:40.809051 1225677 cri.go:89] found id: ""
	I1217 01:32:40.809060 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:40.809157 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.813212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:40.813294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:40.840793 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:40.840821 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:40.840826 1225677 cri.go:89] found id: ""
	I1217 01:32:40.840834 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:40.840915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.845018 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.848655 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:40.848732 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:40.875726 1225677 cri.go:89] found id: ""
	I1217 01:32:40.875750 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.875761 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:40.875767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:40.875825 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:40.902504 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:40.902527 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:40.902532 1225677 cri.go:89] found id: ""
	I1217 01:32:40.902540 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:40.902593 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.906394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.910259 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:40.910330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:40.936570 1225677 cri.go:89] found id: ""
	I1217 01:32:40.936599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.936609 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:40.936616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:40.936676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:40.964358 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:40.964381 1225677 cri.go:89] found id: ""
	I1217 01:32:40.964389 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:40.964541 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.968221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:40.968292 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:40.998606 1225677 cri.go:89] found id: ""
	I1217 01:32:40.998633 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.998644 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:40.998654 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:40.998668 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:41.022520 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:41.022551 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:41.051598 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:41.051625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:41.091115 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:41.091148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:41.159179 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:41.159223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:41.190970 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:41.190997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:41.225786 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:41.225815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:41.294484 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:41.294509 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:41.294523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:41.346979 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:41.347017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:41.374095 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:41.374126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:41.456622 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:41.456658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
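	Each retry cycle above enumerates the same fixed set of control-plane components by name before gathering their logs; a minimal sketch of that enumeration (assuming shell access to the node, e.g. via `minikube ssh` for the default profile, and crictl on the node's PATH):
	
	# list all (running or exited) containers for each expected control-plane component
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output means no container was created for this component
	done
	
	In this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager return IDs, which matches the 'No container was found matching "coredns"/"kube-proxy"/"kindnet"' warnings above.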
	I1217 01:32:44.066375 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:44.077293 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:44.077365 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:44.104332 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.104476 1225677 cri.go:89] found id: ""
	I1217 01:32:44.104504 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:44.104580 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.108715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:44.108799 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:44.140649 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.140672 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.140677 1225677 cri.go:89] found id: ""
	I1217 01:32:44.140684 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:44.140763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.144834 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.148730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:44.148811 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:44.197233 1225677 cri.go:89] found id: ""
	I1217 01:32:44.197259 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.197268 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:44.197274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:44.197350 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:44.240339 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:44.240363 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.240368 1225677 cri.go:89] found id: ""
	I1217 01:32:44.240376 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:44.240456 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.244962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.248793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:44.248913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:44.278464 1225677 cri.go:89] found id: ""
	I1217 01:32:44.278491 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.278501 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:44.278507 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:44.278585 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:44.308914 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.308938 1225677 cri.go:89] found id: ""
	I1217 01:32:44.308958 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:44.309048 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.313878 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:44.313951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:44.344530 1225677 cri.go:89] found id: ""
	I1217 01:32:44.344555 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.344577 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:44.344588 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:44.344600 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.372833 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:44.372864 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:44.452952 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:44.452990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:44.474609 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:44.474642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:44.552482 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:44.552507 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:44.552521 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.580322 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:44.580352 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.610292 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:44.610320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:44.643236 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:44.643266 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.755542 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:44.755601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.808715 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:44.808771 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.856301 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:44.856338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.419847 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:47.431877 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:47.431951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:47.461659 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:47.461682 1225677 cri.go:89] found id: ""
	I1217 01:32:47.461690 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:47.461747 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.465698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:47.465822 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:47.495157 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.495179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.495184 1225677 cri.go:89] found id: ""
	I1217 01:32:47.495192 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:47.495247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.499337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.503995 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:47.504080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:47.543135 1225677 cri.go:89] found id: ""
	I1217 01:32:47.543158 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.543167 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:47.543174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:47.543238 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:47.572765 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.572791 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:47.572797 1225677 cri.go:89] found id: ""
	I1217 01:32:47.572804 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:47.572867 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.577796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.581659 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:47.581760 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:47.612595 1225677 cri.go:89] found id: ""
	I1217 01:32:47.612660 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.612674 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:47.612681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:47.612744 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:47.642199 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:47.642223 1225677 cri.go:89] found id: ""
	I1217 01:32:47.642231 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:47.642287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.646215 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:47.646285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:47.672805 1225677 cri.go:89] found id: ""
	I1217 01:32:47.672830 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.672839 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:47.672849 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:47.672859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:47.702885 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:47.702917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:47.723284 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:47.723318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:47.799644 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:47.799674 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:47.799688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.839852 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:47.839884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.888519 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:47.888557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:47.973305 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:47.973344 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:48.081814 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:48.081853 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:48.114561 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:48.114590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:48.208193 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:48.208234 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:48.241262 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:48.241293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
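	The repeated "connection refused" on localhost:8443 in the describe-nodes attempts shows that, although a kube-apiserver container has been created, nothing is yet listening on the apiserver port, so kubectl against the in-node kubeconfig cannot succeed. A hedged way to confirm this by hand from inside the node (curl being present in the node image is an assumption):
	
	# check whether the apiserver is accepting connections on its health endpoint
	curl -sk https://localhost:8443/healthz || echo "apiserver not serving on :8443"
	# inspect the newest kube-apiserver container's recent output for the failure reason
	sudo crictl ps -a --quiet --name=kube-apiserver | head -n1 | xargs -r sudo crictl logs --tail 50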
	I1217 01:32:50.770940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:50.781882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:50.781951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:50.809569 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:50.809595 1225677 cri.go:89] found id: ""
	I1217 01:32:50.809604 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:50.809665 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.814519 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:50.814594 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:50.849443 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:50.849472 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:50.849478 1225677 cri.go:89] found id: ""
	I1217 01:32:50.849486 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:50.849564 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.853510 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.857119 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:50.857224 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:50.888246 1225677 cri.go:89] found id: ""
	I1217 01:32:50.888275 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.888284 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:50.888291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:50.888351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:50.916294 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:50.916320 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:50.916326 1225677 cri.go:89] found id: ""
	I1217 01:32:50.916333 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:50.916388 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.920299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.924658 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:50.924730 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:50.957966 1225677 cri.go:89] found id: ""
	I1217 01:32:50.957994 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.958003 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:50.958009 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:50.958069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:50.991282 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.991304 1225677 cri.go:89] found id: ""
	I1217 01:32:50.991312 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:50.991377 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.995730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:50.995797 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:51.034122 1225677 cri.go:89] found id: ""
	I1217 01:32:51.034199 1225677 logs.go:282] 0 containers: []
	W1217 01:32:51.034238 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:51.034266 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:51.034295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:51.062022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:51.062100 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:51.081698 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:51.081733 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:51.112382 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:51.112482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:51.172152 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:51.172190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:51.213603 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:51.213634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:51.297400 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:51.297439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:51.331335 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:51.331412 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:51.426253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:51.426289 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:51.499310 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:51.499332 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:51.499348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:51.572760 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:51.572795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.122214 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:54.133644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:54.133721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:54.162887 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.162912 1225677 cri.go:89] found id: ""
	I1217 01:32:54.162922 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:54.162978 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.167057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:54.167127 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:54.205900 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.205920 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.205925 1225677 cri.go:89] found id: ""
	I1217 01:32:54.205932 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:54.205987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.210350 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.214343 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:54.214419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:54.246321 1225677 cri.go:89] found id: ""
	I1217 01:32:54.246348 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.246357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:54.246364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:54.246424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:54.276281 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.276305 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.276310 1225677 cri.go:89] found id: ""
	I1217 01:32:54.276319 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:54.276379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.281009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.285204 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:54.285281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:54.311149 1225677 cri.go:89] found id: ""
	I1217 01:32:54.311225 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.311251 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:54.311268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:54.311342 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:54.339737 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.339763 1225677 cri.go:89] found id: ""
	I1217 01:32:54.339771 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:54.339825 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.343615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:54.343749 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:54.370945 1225677 cri.go:89] found id: ""
	I1217 01:32:54.370971 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.370981 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:54.370991 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:54.371003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:54.390464 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:54.390495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:54.470328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:54.470363 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:54.470377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.495970 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:54.495999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.557300 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:54.557336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.585791 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:54.585821 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.612126 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:54.612152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:54.653218 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:54.653246 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:54.752385 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:54.752432 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.814139 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:54.814175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.885191 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:54.885226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:57.468539 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:57.479841 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:57.479913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:57.511032 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.511058 1225677 cri.go:89] found id: ""
	I1217 01:32:57.511067 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:57.511130 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.515373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:57.515446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:57.558508 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.558531 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:57.558537 1225677 cri.go:89] found id: ""
	I1217 01:32:57.558550 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:57.558622 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.563150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.567245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:57.567322 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:57.594294 1225677 cri.go:89] found id: ""
	I1217 01:32:57.594330 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.594341 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:57.594347 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:57.594411 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:57.626077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:57.626100 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.626106 1225677 cri.go:89] found id: ""
	I1217 01:32:57.626114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:57.626173 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.630289 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.634055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:57.634130 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:57.661683 1225677 cri.go:89] found id: ""
	I1217 01:32:57.661711 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.661721 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:57.661727 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:57.661785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:57.690521 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.690556 1225677 cri.go:89] found id: ""
	I1217 01:32:57.690565 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:57.690632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.694587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:57.694687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:57.721760 1225677 cri.go:89] found id: ""
	I1217 01:32:57.721783 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.721792 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:57.721801 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:57.721830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.749279 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:57.749308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.781988 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:57.782017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:57.820059 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:57.820089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:57.841084 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:57.841121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.884653 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:57.884752 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.932570 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:57.932605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:58.015607 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:58.015649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:58.116442 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:58.116479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:58.205896 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:58.205921 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:58.205934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:58.252524 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:58.252595 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
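	Each cycle opens with a pgrep check for the kube-apiserver process and then re-gathers the logs shown above while the cluster fails to become healthy. A minimal sketch of a comparable wait loop, using the same pattern the log shows (the 3-second interval is an illustrative assumption, not minikube's actual value):
	
	# wait until a kube-apiserver process for this profile is running, re-checking every few seconds
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  echo "kube-apiserver not running yet, retrying..."
	  sleep 3
	done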
	I1217 01:33:00.831933 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:00.843915 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:00.844011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:00.872994 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:00.873018 1225677 cri.go:89] found id: ""
	I1217 01:33:00.873027 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:00.873080 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.876819 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:00.876914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:00.904306 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:00.904329 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:00.904334 1225677 cri.go:89] found id: ""
	I1217 01:33:00.904342 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:00.904397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.908029 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.911563 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:00.911642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:00.940652 1225677 cri.go:89] found id: ""
	I1217 01:33:00.940678 1225677 logs.go:282] 0 containers: []
	W1217 01:33:00.940687 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:00.940694 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:00.940752 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:00.967462 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.967503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:00.967514 1225677 cri.go:89] found id: ""
	I1217 01:33:00.967522 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:00.967601 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.971689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.976107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:00.976187 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:01.015150 1225677 cri.go:89] found id: ""
	I1217 01:33:01.015230 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.015253 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:01.015273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:01.015366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:01.044488 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.044553 1225677 cri.go:89] found id: ""
	I1217 01:33:01.044578 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:01.044671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:01.048372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:01.048523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:01.083014 1225677 cri.go:89] found id: ""
	I1217 01:33:01.083096 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.083121 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:01.083173 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:01.083208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:01.181547 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:01.181588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:01.202930 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:01.202966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:01.255543 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:01.255580 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:01.282899 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:01.282927 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.310357 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:01.310387 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:01.361428 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:01.361458 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:01.439491 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:01.439564 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:01.439594 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:01.466548 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:01.466575 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:01.524293 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:01.524332 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:01.603276 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:01.603314 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.194004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:04.206859 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:04.206931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:04.245597 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.245621 1225677 cri.go:89] found id: ""
	I1217 01:33:04.245630 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:04.245688 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.249418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:04.249489 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:04.278257 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.278277 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.278284 1225677 cri.go:89] found id: ""
	I1217 01:33:04.278291 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:04.278405 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.282613 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.286801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:04.286878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:04.313756 1225677 cri.go:89] found id: ""
	I1217 01:33:04.313825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.313852 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:04.313866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:04.313946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:04.343505 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.343528 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.343533 1225677 cri.go:89] found id: ""
	I1217 01:33:04.343542 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:04.343595 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.347432 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.351245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:04.351318 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:04.378415 1225677 cri.go:89] found id: ""
	I1217 01:33:04.378443 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.378453 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:04.378461 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:04.378523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:04.404603 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.404635 1225677 cri.go:89] found id: ""
	I1217 01:33:04.404645 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:04.404699 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.408372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:04.408490 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:04.435025 1225677 cri.go:89] found id: ""
	I1217 01:33:04.435053 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.435063 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:04.435072 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:04.435084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:04.453398 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:04.453431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:04.532185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:04.532207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:04.532220 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.565093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:04.565122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.608097 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:04.608141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.669592 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:04.669635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.698199 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:04.698230 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.781891 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:04.781933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:04.889443 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:04.889483 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.935503 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:04.935540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.962255 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:04.962288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.497519 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:07.509544 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:07.509619 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:07.541912 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.541930 1225677 cri.go:89] found id: ""
	I1217 01:33:07.541938 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:07.541998 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.545880 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:07.545967 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:07.576061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.576085 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:07.576090 1225677 cri.go:89] found id: ""
	I1217 01:33:07.576098 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:07.576156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.580118 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.584118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:07.584216 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:07.613260 1225677 cri.go:89] found id: ""
	I1217 01:33:07.613288 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.613297 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:07.613304 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:07.613390 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:07.643089 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:07.643113 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:07.643118 1225677 cri.go:89] found id: ""
	I1217 01:33:07.643126 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:07.643181 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.646892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.650360 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:07.650433 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:07.677367 1225677 cri.go:89] found id: ""
	I1217 01:33:07.677393 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.677403 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:07.677409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:07.677515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:07.705475 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.705499 1225677 cri.go:89] found id: ""
	I1217 01:33:07.705508 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:07.705588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.709429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:07.709538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:07.737814 1225677 cri.go:89] found id: ""
	I1217 01:33:07.737838 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.737846 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:07.737855 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:07.737867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.767138 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:07.767166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.800084 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:07.800165 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:07.820093 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:07.820124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:07.887706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:07.887729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:07.887744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.915091 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:07.915122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.956054 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:07.956116 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:08.019066 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:08.019105 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:08.080377 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:08.080423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:08.124710 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:08.124793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:08.214495 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:08.214593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:10.827104 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:10.838284 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:10.838422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:10.874165 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:10.874184 1225677 cri.go:89] found id: ""
	I1217 01:33:10.874192 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:10.874245 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.878108 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:10.878180 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:10.903766 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:10.903789 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:10.903794 1225677 cri.go:89] found id: ""
	I1217 01:33:10.903802 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:10.903857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.907574 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.911142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:10.911214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:10.938246 1225677 cri.go:89] found id: ""
	I1217 01:33:10.938273 1225677 logs.go:282] 0 containers: []
	W1217 01:33:10.938283 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:10.938289 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:10.938347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:10.964843 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:10.964866 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:10.964871 1225677 cri.go:89] found id: ""
	I1217 01:33:10.964879 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:10.964935 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.968730 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.972392 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:10.972503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:11.008562 1225677 cri.go:89] found id: ""
	I1217 01:33:11.008590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.008600 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:11.008607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:11.008716 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:11.041307 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.041342 1225677 cri.go:89] found id: ""
	I1217 01:33:11.041352 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:11.041408 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:11.045319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:11.045394 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:11.072727 1225677 cri.go:89] found id: ""
	I1217 01:33:11.072757 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.072771 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:11.072781 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:11.072793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:11.092411 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:11.092531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:11.173959 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:11.173986 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:11.174000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:11.204098 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:11.204130 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:11.265126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:11.265169 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:11.329309 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:11.329350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:11.366487 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:11.366516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:11.449439 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:11.449474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:11.493614 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:11.493648 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.530111 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:11.530142 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:11.573692 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:11.573724 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.175120 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:14.187102 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:14.187212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:14.217900 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.217923 1225677 cri.go:89] found id: ""
	I1217 01:33:14.217933 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:14.217993 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.228556 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:14.228632 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:14.256615 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.256694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.256731 1225677 cri.go:89] found id: ""
	I1217 01:33:14.256747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:14.256855 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.260873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.264886 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:14.264982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:14.293944 1225677 cri.go:89] found id: ""
	I1217 01:33:14.294012 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.294036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:14.294057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:14.294149 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:14.322566 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.322586 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.322591 1225677 cri.go:89] found id: ""
	I1217 01:33:14.322599 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:14.322693 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.326575 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.330162 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:14.330237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:14.356466 1225677 cri.go:89] found id: ""
	I1217 01:33:14.356491 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.356500 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:14.356506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:14.356566 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:14.386031 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:14.386055 1225677 cri.go:89] found id: ""
	I1217 01:33:14.386064 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:14.386142 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.390030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:14.390110 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:14.416257 1225677 cri.go:89] found id: ""
	I1217 01:33:14.416284 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.416293 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:14.416303 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:14.416317 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.511192 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:14.511232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:14.604109 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:14.604132 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:14.604148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.656861 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:14.656895 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.685614 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:14.685642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:14.764169 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:14.764208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:14.812699 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:14.812730 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:14.831513 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:14.831547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.858309 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:14.858339 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.909041 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:14.909072 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.975681 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:14.975723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.515279 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:17.540730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:17.540806 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:17.570081 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:17.570102 1225677 cri.go:89] found id: ""
	I1217 01:33:17.570110 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:17.570178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.574399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:17.574471 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:17.599589 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:17.599610 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:17.599614 1225677 cri.go:89] found id: ""
	I1217 01:33:17.599622 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:17.599689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.604570 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.608574 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:17.608645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:17.635229 1225677 cri.go:89] found id: ""
	I1217 01:33:17.635306 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.635329 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:17.635350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:17.635422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:17.668964 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:17.669003 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.669009 1225677 cri.go:89] found id: ""
	I1217 01:33:17.669017 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:17.669103 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.673057 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.677753 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:17.677826 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:17.707206 1225677 cri.go:89] found id: ""
	I1217 01:33:17.707245 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.707255 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:17.707261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:17.707325 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:17.740289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.740313 1225677 cri.go:89] found id: ""
	I1217 01:33:17.740322 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:17.740385 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.744409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:17.744515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:17.771770 1225677 cri.go:89] found id: ""
	I1217 01:33:17.771797 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.771806 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:17.771815 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:17.771828 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.800155 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:17.800190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:17.882443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:17.882481 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:17.935750 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:17.935781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:17.954392 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:17.954425 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:18.031535 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:18.031568 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:18.031585 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:18.079987 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:18.080029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:18.108390 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:18.108454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:18.206148 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:18.206190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:18.238865 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:18.238894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:18.280200 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:18.280236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.844541 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:20.855183 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:20.855255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:20.883645 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:20.883666 1225677 cri.go:89] found id: ""
	I1217 01:33:20.883673 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:20.883731 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.888021 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:20.888094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:20.917299 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:20.917325 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:20.917330 1225677 cri.go:89] found id: ""
	I1217 01:33:20.917338 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:20.917397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.921256 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.925997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:20.926069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:20.952872 1225677 cri.go:89] found id: ""
	I1217 01:33:20.952898 1225677 logs.go:282] 0 containers: []
	W1217 01:33:20.952907 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:20.952913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:20.952970 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:20.979961 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.979983 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:20.979989 1225677 cri.go:89] found id: ""
	I1217 01:33:20.979998 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:20.980064 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.984302 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.989098 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:20.989171 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:21.023299 1225677 cri.go:89] found id: ""
	I1217 01:33:21.023365 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.023382 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:21.023390 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:21.023454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:21.052742 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.052763 1225677 cri.go:89] found id: ""
	I1217 01:33:21.052773 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:21.052830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:21.056774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:21.056847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:21.086360 1225677 cri.go:89] found id: ""
	I1217 01:33:21.086382 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.086391 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:21.086399 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:21.086411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:21.114471 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:21.114500 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:21.213416 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:21.213451 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:21.294188 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:21.294212 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:21.294253 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:21.321989 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:21.322022 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:21.361898 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:21.361940 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:21.415113 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:21.415151 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.443169 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:21.443202 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:21.538356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:21.538403 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:21.584226 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:21.584255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:21.602588 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:21.602625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.196991 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:24.207442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:24.207518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:24.243683 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.243708 1225677 cri.go:89] found id: ""
	I1217 01:33:24.243717 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:24.243772 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.247370 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:24.247444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:24.274124 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.274153 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.274159 1225677 cri.go:89] found id: ""
	I1217 01:33:24.274167 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:24.274224 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.277936 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.281546 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:24.281628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:24.310864 1225677 cri.go:89] found id: ""
	I1217 01:33:24.310893 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.310903 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:24.310910 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:24.310968 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:24.342620 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.342643 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.342648 1225677 cri.go:89] found id: ""
	I1217 01:33:24.342656 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:24.342714 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.346873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.350690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:24.350776 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:24.378447 1225677 cri.go:89] found id: ""
	I1217 01:33:24.378476 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.378486 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:24.378510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:24.378592 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:24.410097 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.410122 1225677 cri.go:89] found id: ""
	I1217 01:33:24.410132 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:24.410193 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.414020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:24.414094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:24.440741 1225677 cri.go:89] found id: ""
	I1217 01:33:24.440825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.440851 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:24.440879 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:24.440912 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:24.460132 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:24.460163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.493812 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:24.493842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.536741 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:24.536777 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.597219 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:24.597260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.663765 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:24.663805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.703808 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:24.703840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:24.784250 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:24.784288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:24.883741 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:24.883779 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:24.962818 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:24.962842 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:24.962856 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.994828 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:24.994858 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
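	The gathering pass above repeats roughly every three seconds while the harness waits for the apiserver to come back. A generic poll of that shape, independent of minikube (a sketch only; the five-minute deadline is an assumption, not taken from this report), can be written as:

	    # poll the apiserver health endpoint until it answers or a deadline passes
	    deadline=$(( $(date +%s) + 300 ))
	    until curl -ksf https://localhost:8443/healthz >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "apiserver still not healthy after 5 minutes" >&2
	        exit 1
	      fi
	      sleep 3   # roughly the interval between gathering cycles seen above
	    done
	    echo "apiserver healthy"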
	I1217 01:33:27.546732 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:27.564740 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:27.564805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:27.608525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:27.608549 1225677 cri.go:89] found id: ""
	I1217 01:33:27.608558 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:27.608611 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.613062 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:27.613135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:27.659805 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:27.659827 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:27.659831 1225677 cri.go:89] found id: ""
	I1217 01:33:27.659838 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:27.659896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.664210 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.668351 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:27.668446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:27.704696 1225677 cri.go:89] found id: ""
	I1217 01:33:27.704771 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.704794 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:27.704815 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:27.704898 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:27.738798 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:27.738821 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:27.738827 1225677 cri.go:89] found id: ""
	I1217 01:33:27.738834 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:27.738896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.743026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.746985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:27.747059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:27.785087 1225677 cri.go:89] found id: ""
	I1217 01:33:27.785111 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.785119 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:27.785126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:27.785192 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:27.818270 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:27.818289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:27.818293 1225677 cri.go:89] found id: ""
	I1217 01:33:27.818300 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:27.818356 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.822652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.826638 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:27.826695 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:27.865573 1225677 cri.go:89] found id: ""
	I1217 01:33:27.865604 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.865613 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:27.865623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:27.865634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:27.972193 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:27.972232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:28.056562 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:28.056589 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:28.056605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:28.085398 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:28.085429 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:28.132214 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:28.132252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:28.174271 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:28.174303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:28.273045 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:28.273082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:28.321799 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:28.321880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:28.342146 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:28.342292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:28.406933 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:28.407120 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:28.498600 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:28.498680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:28.534124 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:28.534150 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.091052 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:31.103205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:31.103279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:31.140533 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.140556 1225677 cri.go:89] found id: ""
	I1217 01:33:31.140564 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:31.140627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.145121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:31.145202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:31.175735 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.175761 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.175768 1225677 cri.go:89] found id: ""
	I1217 01:33:31.175775 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:31.175832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.180026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.184555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:31.184628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:31.213074 1225677 cri.go:89] found id: ""
	I1217 01:33:31.213100 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.213110 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:31.213117 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:31.213174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:31.251260 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.251286 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.251291 1225677 cri.go:89] found id: ""
	I1217 01:33:31.251299 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:31.251354 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.255625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.259649 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:31.259726 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:31.287030 1225677 cri.go:89] found id: ""
	I1217 01:33:31.287056 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.287065 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:31.287072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:31.287128 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:31.314782 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.314851 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.314876 1225677 cri.go:89] found id: ""
	I1217 01:33:31.314902 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:31.314984 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.320071 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.324354 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:31.324534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:31.357412 1225677 cri.go:89] found id: ""
	I1217 01:33:31.357439 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.357449 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:31.357464 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:31.357480 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:31.462967 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:31.463006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:31.482965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:31.482995 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:31.552928 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:31.552952 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:31.552966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.579435 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:31.579470 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.619907 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:31.619945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.687595 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:31.687636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.720143 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:31.720175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.746106 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:31.746135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.812096 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:31.812131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.841610 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:31.841646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:31.920159 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:31.920197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
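	Each pass resolves container IDs first and only then tails their logs. The same two-step flow can be reproduced by hand for any component (a sketch; etcd is just an example name here):

	    # step 1: resolve the container ID(s) for a component by name
	    ids=$(sudo crictl ps -a --quiet --name=etcd)

	    # step 2: tail the last 400 lines of each match, as the gatherer does
	    for id in $ids; do
	      sudo /usr/local/bin/crictl logs --tail 400 "$id"
	    done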
	I1217 01:33:34.457713 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:34.469492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:34.469574 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:34.497755 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:34.497777 1225677 cri.go:89] found id: ""
	I1217 01:33:34.497786 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:34.497850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.501620 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:34.501703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:34.532206 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:34.532227 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:34.532231 1225677 cri.go:89] found id: ""
	I1217 01:33:34.532238 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:34.532299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.537376 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.541069 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:34.541142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:34.577690 1225677 cri.go:89] found id: ""
	I1217 01:33:34.577730 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.577740 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:34.577763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:34.577844 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:34.606156 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.606176 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:34.606180 1225677 cri.go:89] found id: ""
	I1217 01:33:34.606188 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:34.606243 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.610716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.614894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:34.614990 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:34.644563 1225677 cri.go:89] found id: ""
	I1217 01:33:34.644590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.644599 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:34.644605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:34.644685 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:34.673641 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:34.673666 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:34.673671 1225677 cri.go:89] found id: ""
	I1217 01:33:34.673679 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:34.673737 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.677531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.681295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:34.681370 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:34.708990 1225677 cri.go:89] found id: ""
	I1217 01:33:34.709071 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.709088 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:34.709099 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:34.709111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:34.809701 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:34.809785 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:34.828178 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:34.828210 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:34.903131 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:34.903155 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:34.903168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.971266 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:34.971304 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:35.004179 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:35.004215 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:35.041784 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:35.041815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:35.067541 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:35.067571 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:35.126841 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:35.126874 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:35.172191 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:35.172226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:35.200255 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:35.200295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:35.239991 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:35.240030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:37.824762 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:37.835623 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:37.835693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:37.865989 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:37.866008 1225677 cri.go:89] found id: ""
	I1217 01:33:37.866018 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:37.866073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.869857 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:37.869946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:37.898865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:37.898940 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:37.898960 1225677 cri.go:89] found id: ""
	I1217 01:33:37.898986 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:37.899093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.903232 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.907211 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:37.907281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:37.939280 1225677 cri.go:89] found id: ""
	I1217 01:33:37.939302 1225677 logs.go:282] 0 containers: []
	W1217 01:33:37.939311 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:37.939318 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:37.939379 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:37.967924 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:37.967945 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:37.967949 1225677 cri.go:89] found id: ""
	I1217 01:33:37.967957 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:37.968032 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.971797 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.975432 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:37.975510 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:38.007766 1225677 cri.go:89] found id: ""
	I1217 01:33:38.007790 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.007798 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:38.007805 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:38.007864 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:38.037473 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.037495 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.037503 1225677 cri.go:89] found id: ""
	I1217 01:33:38.037511 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:38.037566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.041569 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.045417 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:38.045524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:38.073829 1225677 cri.go:89] found id: ""
	I1217 01:33:38.073851 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.073860 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:38.073870 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:38.073882 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:38.093728 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:38.093764 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:38.176670 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:38.176690 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:38.176703 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:38.211414 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:38.211443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:38.263725 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:38.263761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:38.309151 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:38.309186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:38.338107 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:38.338143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.369538 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:38.369566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:38.449918 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:38.449954 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:38.542249 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:38.542288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:38.612539 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:38.612617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.642932 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:38.643015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:41.175028 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:41.186849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:41.186921 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:41.230880 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.230955 1225677 cri.go:89] found id: ""
	I1217 01:33:41.230992 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:41.231084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.235480 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:41.235641 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:41.266906 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.266980 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.267014 1225677 cri.go:89] found id: ""
	I1217 01:33:41.267040 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:41.267127 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.271136 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.275105 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:41.275225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:41.306499 1225677 cri.go:89] found id: ""
	I1217 01:33:41.306580 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.306603 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:41.306624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:41.306737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:41.333549 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.333575 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.333580 1225677 cri.go:89] found id: ""
	I1217 01:33:41.333589 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:41.333643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.337497 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.341450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:41.341531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:41.368976 1225677 cri.go:89] found id: ""
	I1217 01:33:41.369004 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.369014 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:41.369020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:41.369082 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:41.397520 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.397583 1225677 cri.go:89] found id: ""
	I1217 01:33:41.397607 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:41.397684 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.401528 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:41.401607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:41.427395 1225677 cri.go:89] found id: ""
	I1217 01:33:41.427423 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.427434 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:41.427444 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:41.427463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:41.525514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:41.525559 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:41.551264 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:41.551299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:41.625083 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:41.625123 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:41.625147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.702454 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:41.702490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.735107 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:41.735134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.769228 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:41.769269 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.799696 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:41.799725 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.848171 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:41.848207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.933395 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:41.933446 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:42.025408 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:42.025452 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:44.562646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:44.573393 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:44.573486 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:44.600868 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:44.600895 1225677 cri.go:89] found id: ""
	I1217 01:33:44.600906 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:44.600983 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.604710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:44.604780 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:44.632082 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:44.632158 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:44.632187 1225677 cri.go:89] found id: ""
	I1217 01:33:44.632208 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:44.632294 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.636315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.640212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:44.640285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:44.669382 1225677 cri.go:89] found id: ""
	I1217 01:33:44.669404 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.669413 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:44.669419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:44.669480 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:44.699713 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:44.699732 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.699737 1225677 cri.go:89] found id: ""
	I1217 01:33:44.699747 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:44.699801 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.703608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.707118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:44.707191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:44.733881 1225677 cri.go:89] found id: ""
	I1217 01:33:44.733905 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.733914 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:44.733921 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:44.733983 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:44.761418 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:44.761440 1225677 cri.go:89] found id: ""
	I1217 01:33:44.761449 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:44.761507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.765368 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:44.765451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:44.797562 1225677 cri.go:89] found id: ""
	I1217 01:33:44.797587 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.797595 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:44.797605 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:44.797617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.824683 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:44.824716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:44.935133 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:44.935177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:44.954652 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:44.954684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:45.015678 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:45.015775 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:45.189553 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:45.191524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:45.273264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:45.273306 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:45.371974 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:45.372013 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:45.409119 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:45.409149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:45.483606 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:45.483631 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:45.483645 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:45.511796 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:45.511826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.069605 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:48.081402 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:48.081501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:48.113467 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.113487 1225677 cri.go:89] found id: ""
	I1217 01:33:48.113496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:48.113554 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.123702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:48.123830 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:48.152225 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.152299 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.152320 1225677 cri.go:89] found id: ""
	I1217 01:33:48.152346 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:48.152452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.156596 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.160848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:48.160930 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:48.192903 1225677 cri.go:89] found id: ""
	I1217 01:33:48.192934 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.192944 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:48.192951 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:48.193016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:48.223459 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.223483 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.223489 1225677 cri.go:89] found id: ""
	I1217 01:33:48.223496 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:48.223577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.228708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.233033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:48.233131 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:48.264313 1225677 cri.go:89] found id: ""
	I1217 01:33:48.264339 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.264348 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:48.264355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:48.264430 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:48.292891 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.292963 1225677 cri.go:89] found id: ""
	I1217 01:33:48.292986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:48.293068 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.297013 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:48.297089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:48.324697 1225677 cri.go:89] found id: ""
	I1217 01:33:48.324724 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.324734 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:48.324743 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:48.324755 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:48.343285 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:48.343318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.401079 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:48.401121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.445651 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:48.445685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.487906 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:48.487936 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.520261 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:48.520288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:48.612095 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:48.612132 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:48.686505 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:48.686528 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:48.686545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.715518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:48.715549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.780723 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:48.780758 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:48.813883 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:48.813910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.424534 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:51.435019 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:51.435089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:51.461515 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.461539 1225677 cri.go:89] found id: ""
	I1217 01:33:51.461549 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:51.461610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.465697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:51.465778 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:51.494232 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.494254 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:51.494260 1225677 cri.go:89] found id: ""
	I1217 01:33:51.494267 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:51.494342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.498178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.501847 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:51.501920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:51.533242 1225677 cri.go:89] found id: ""
	I1217 01:33:51.533267 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.533277 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:51.533283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:51.533356 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:51.559915 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.559937 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:51.559942 1225677 cri.go:89] found id: ""
	I1217 01:33:51.559950 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:51.560017 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.563739 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.567426 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:51.567506 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:51.598933 1225677 cri.go:89] found id: ""
	I1217 01:33:51.598958 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.598978 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:51.598985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:51.599043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:51.628013 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:51.628085 1225677 cri.go:89] found id: ""
	I1217 01:33:51.628107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:51.628195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.632081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:51.632153 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:51.664059 1225677 cri.go:89] found id: ""
	I1217 01:33:51.664095 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.664104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:51.664114 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:51.664127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.703117 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:51.703141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.746864 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:51.746901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.813259 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:51.813294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:51.890408 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:51.890448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.996243 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:51.996281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:52.078355 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:52.078385 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:52.078399 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:52.124157 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:52.124201 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:52.158325 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:52.158406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:52.194882 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:52.194917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:52.236180 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:52.236223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:54.755766 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:54.766584 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:54.766659 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:54.794813 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:54.794834 1225677 cri.go:89] found id: ""
	I1217 01:33:54.794844 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:54.794900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.798697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:54.798816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:54.830345 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:54.830368 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:54.830374 1225677 cri.go:89] found id: ""
	I1217 01:33:54.830381 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:54.830437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.834212 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.837869 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:54.837958 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:54.865687 1225677 cri.go:89] found id: ""
	I1217 01:33:54.865710 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.865720 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:54.865726 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:54.865784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:54.893199 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:54.893222 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:54.893228 1225677 cri.go:89] found id: ""
	I1217 01:33:54.893236 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:54.893300 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.897296 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.901035 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:54.901109 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:54.935123 1225677 cri.go:89] found id: ""
	I1217 01:33:54.935150 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.935160 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:54.935165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:54.935227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:54.960828 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:54.960908 1225677 cri.go:89] found id: ""
	I1217 01:33:54.960925 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:54.960994 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.965788 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:54.965858 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:54.996816 1225677 cri.go:89] found id: ""
	I1217 01:33:54.996844 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.996854 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:54.996864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:54.996877 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:55.049187 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:55.049226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:55.122184 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:55.122224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:55.149525 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:55.149555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:55.259828 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:55.259866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:55.286876 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:55.286905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:55.332115 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:55.332149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:55.359308 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:55.359340 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:55.444861 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:55.444901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:55.492994 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:55.493026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:55.512281 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:55.512312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:55.587576 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.089262 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:58.101573 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:58.101658 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:58.137991 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:58.138015 1225677 cri.go:89] found id: ""
	I1217 01:33:58.138024 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:58.138084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.142504 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:58.142579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:58.172313 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.172337 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.172343 1225677 cri.go:89] found id: ""
	I1217 01:33:58.172350 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:58.172446 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.176396 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.180282 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:58.180366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:58.211138 1225677 cri.go:89] found id: ""
	I1217 01:33:58.211171 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.211181 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:58.211193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:58.211257 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:58.243736 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.243759 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.243764 1225677 cri.go:89] found id: ""
	I1217 01:33:58.243773 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:58.243830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.247791 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.251576 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:58.251655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:58.288139 1225677 cri.go:89] found id: ""
	I1217 01:33:58.288173 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.288184 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:58.288193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:58.288255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:58.317667 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.317690 1225677 cri.go:89] found id: ""
	I1217 01:33:58.317700 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:58.317763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.321820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:58.321906 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:58.350850 1225677 cri.go:89] found id: ""
	I1217 01:33:58.350878 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.350888 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:58.350897 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:58.350910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.416830 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:58.416867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.444837 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:58.444868 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:58.528215 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:58.528263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:58.575846 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:58.575880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:58.595772 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:58.595807 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.650340 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:58.650375 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.701278 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:58.701316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.732779 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:58.732810 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:58.835274 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:58.835310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:58.910122 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.910207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:58.910236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.438103 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:01.448838 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:01.448920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:01.479627 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.479651 1225677 cri.go:89] found id: ""
	I1217 01:34:01.479678 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:01.479736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.483564 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:01.483634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:01.510339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.510364 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.510370 1225677 cri.go:89] found id: ""
	I1217 01:34:01.510378 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:01.510435 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.514437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.519025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:01.519139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:01.547434 1225677 cri.go:89] found id: ""
	I1217 01:34:01.547457 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.547466 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:01.547473 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:01.547530 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:01.574487 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.574508 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.574513 1225677 cri.go:89] found id: ""
	I1217 01:34:01.574520 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:01.574577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.578139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.581545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:01.581626 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:01.609342 1225677 cri.go:89] found id: ""
	I1217 01:34:01.609365 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.609374 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:01.609381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:01.609439 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:01.636506 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:01.636530 1225677 cri.go:89] found id: ""
	I1217 01:34:01.636540 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:01.636602 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.640274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:01.640388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:01.669875 1225677 cri.go:89] found id: ""
	I1217 01:34:01.669944 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.669969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:01.669993 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:01.670033 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.710653 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:01.710691 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.763990 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:01.764028 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.833068 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:01.833107 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.863940 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:01.864023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:01.967213 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:01.967254 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:01.992938 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:01.992972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:02.024381 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:02.024443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:02.106857 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:02.106896 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:02.143612 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:02.143646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:02.213706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:02.213729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:02.213742 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.741826 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:04.752958 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:04.753026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:04.783743 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.783762 1225677 cri.go:89] found id: ""
	I1217 01:34:04.783770 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:04.784150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.788287 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:04.788359 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:04.817040 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:04.817073 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:04.817079 1225677 cri.go:89] found id: ""
	I1217 01:34:04.817086 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:04.817147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.821094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.825495 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:04.825571 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:04.853100 1225677 cri.go:89] found id: ""
	I1217 01:34:04.853124 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.853133 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:04.853140 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:04.853202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:04.881403 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:04.881425 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:04.881430 1225677 cri.go:89] found id: ""
	I1217 01:34:04.881438 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:04.881502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.885516 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.889230 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:04.889353 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:04.915187 1225677 cri.go:89] found id: ""
	I1217 01:34:04.915219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.915229 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:04.915235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:04.915296 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:04.946769 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:04.946802 1225677 cri.go:89] found id: ""
	I1217 01:34:04.946811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:04.946884 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.951231 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:04.951339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:04.978082 1225677 cri.go:89] found id: ""
	I1217 01:34:04.978110 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.978120 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:04.978128 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:04.978166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:05.019076 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:05.019109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:05.101083 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:05.101161 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:05.177848 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:05.177870 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:05.177884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:05.204143 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:05.204172 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:05.268231 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:05.268268 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:05.297025 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:05.297054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:05.327881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:05.327911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:05.437319 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:05.437360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:05.456847 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:05.456883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:05.498209 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:05.498242 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.077748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:08.088818 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:08.088890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:08.126181 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.126213 1225677 cri.go:89] found id: ""
	I1217 01:34:08.126227 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:08.126292 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.131226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:08.131346 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:08.160808 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.160832 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.160837 1225677 cri.go:89] found id: ""
	I1217 01:34:08.160846 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:08.160923 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.166045 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.170405 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:08.170497 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:08.200928 1225677 cri.go:89] found id: ""
	I1217 01:34:08.200954 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.200964 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:08.200970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:08.201068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:08.237681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.237706 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.237711 1225677 cri.go:89] found id: ""
	I1217 01:34:08.237719 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:08.237794 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.241696 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.245486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:08.245561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:08.272543 1225677 cri.go:89] found id: ""
	I1217 01:34:08.272572 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.272582 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:08.272594 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:08.272676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:08.304603 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.304627 1225677 cri.go:89] found id: ""
	I1217 01:34:08.304635 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:08.304690 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.308617 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:08.308691 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:08.338781 1225677 cri.go:89] found id: ""
	I1217 01:34:08.338809 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.338818 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:08.338827 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:08.338839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:08.374627 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:08.374660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:08.472485 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:08.472523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:08.490991 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:08.491026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.574253 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:08.574292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.602049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:08.602118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:08.681328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:08.681348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:08.681361 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.708974 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:08.709000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.761284 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:08.761320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.819965 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:08.820006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.850377 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:08.850405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:11.432699 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:11.444142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:11.444218 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:11.477380 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.477404 1225677 cri.go:89] found id: ""
	I1217 01:34:11.477414 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:11.477475 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.481941 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:11.482014 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:11.510503 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.510529 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.510546 1225677 cri.go:89] found id: ""
	I1217 01:34:11.510554 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:11.510650 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.514842 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.518923 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:11.519013 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:11.546962 1225677 cri.go:89] found id: ""
	I1217 01:34:11.546990 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.547000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:11.547006 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:11.547080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:11.574757 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:11.574782 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:11.574787 1225677 cri.go:89] found id: ""
	I1217 01:34:11.574796 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:11.574877 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.579088 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.583273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:11.583402 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:11.613215 1225677 cri.go:89] found id: ""
	I1217 01:34:11.613244 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.613254 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:11.613261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:11.613326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:11.642127 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:11.642166 1225677 cri.go:89] found id: ""
	I1217 01:34:11.642175 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:11.642249 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.646180 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:11.646281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:11.676821 1225677 cri.go:89] found id: ""
	I1217 01:34:11.676848 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.676858 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:11.676868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:11.676880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:11.776881 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:11.776922 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:11.797665 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:11.797700 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:11.873871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:11.873895 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:11.873909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.901431 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:11.901461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.946983 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:11.947021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.993263 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:11.993299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:12.069104 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:12.069143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:12.101484 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:12.101511 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:12.137373 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:12.137404 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:12.219779 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:12.219833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:14.749747 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:14.760900 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:14.760971 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:14.789422 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:14.789504 1225677 cri.go:89] found id: ""
	I1217 01:34:14.789520 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:14.789579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.794016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:14.794094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:14.820779 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:14.820802 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:14.820808 1225677 cri.go:89] found id: ""
	I1217 01:34:14.820815 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:14.820892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.824759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.828502 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:14.828620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:14.855015 1225677 cri.go:89] found id: ""
	I1217 01:34:14.855042 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.855051 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:14.855058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:14.855118 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:14.882554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:14.882580 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:14.882586 1225677 cri.go:89] found id: ""
	I1217 01:34:14.882594 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:14.882649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.886723 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.890383 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:14.890487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:14.921014 1225677 cri.go:89] found id: ""
	I1217 01:34:14.921051 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.921077 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:14.921096 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:14.921186 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:14.950121 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:14.950151 1225677 cri.go:89] found id: ""
	I1217 01:34:14.950160 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:14.950235 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.954391 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:14.954491 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:14.981305 1225677 cri.go:89] found id: ""
	I1217 01:34:14.981381 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.981396 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:14.981406 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:14.981424 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:15.082515 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:15.082601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:15.115676 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:15.115766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:15.207150 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:15.207196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:15.253067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:15.253103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:15.282406 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:15.282434 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:15.332186 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:15.332232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:15.383617 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:15.383653 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:15.413724 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:15.413761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:15.512500 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:15.512539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:15.531712 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:15.531744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:15.607024 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.107382 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:18.125209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:18.125300 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:18.154715 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.154743 1225677 cri.go:89] found id: ""
	I1217 01:34:18.154759 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:18.154827 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.158989 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:18.159058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:18.186887 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.186906 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.186910 1225677 cri.go:89] found id: ""
	I1217 01:34:18.186918 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:18.186974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.191114 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.195016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:18.195088 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:18.230496 1225677 cri.go:89] found id: ""
	I1217 01:34:18.230522 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.230532 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:18.230541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:18.230603 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:18.257433 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.257453 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.257458 1225677 cri.go:89] found id: ""
	I1217 01:34:18.257466 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:18.257522 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.261223 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.264998 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:18.265077 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:18.298281 1225677 cri.go:89] found id: ""
	I1217 01:34:18.298359 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.298373 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:18.298381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:18.298438 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:18.326008 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:18.326029 1225677 cri.go:89] found id: ""
	I1217 01:34:18.326038 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:18.326094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.329952 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:18.330026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:18.355880 1225677 cri.go:89] found id: ""
	I1217 01:34:18.355914 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.355924 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:18.355956 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:18.355971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:18.430677 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:18.430716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:18.461146 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:18.461178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:18.483944 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:18.483976 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:18.558884 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.558914 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:18.558930 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.631593 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:18.631631 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.661399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:18.661431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:18.765933 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:18.765971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.798005 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:18.798035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.838207 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:18.838245 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.879939 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:18.879973 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.409362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:21.420285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:21.420355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:21.450399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:21.450424 1225677 cri.go:89] found id: ""
	I1217 01:34:21.450433 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:21.450488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.454541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:21.454613 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:21.484061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.484086 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:21.484091 1225677 cri.go:89] found id: ""
	I1217 01:34:21.484099 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:21.484156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.488024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.491648 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:21.491718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:21.522026 1225677 cri.go:89] found id: ""
	I1217 01:34:21.522052 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.522062 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:21.522071 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:21.522139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:21.554855 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.554887 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:21.554894 1225677 cri.go:89] found id: ""
	I1217 01:34:21.554902 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:21.554955 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.558520 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.562302 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:21.562407 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:21.590541 1225677 cri.go:89] found id: ""
	I1217 01:34:21.590564 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.590574 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:21.590580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:21.590636 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:21.626269 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.626340 1225677 cri.go:89] found id: ""
	I1217 01:34:21.626366 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:21.626428 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.630350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:21.630464 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:21.666471 1225677 cri.go:89] found id: ""
	I1217 01:34:21.666498 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.666507 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:21.666516 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:21.666533 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.706780 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:21.706815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.774693 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:21.774729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:21.861669 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:21.861713 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:21.977061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:21.977096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:22.003122 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:22.003171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:22.051916 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:22.051957 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:22.082713 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:22.082746 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:22.116010 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:22.116037 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:22.146809 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:22.146848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:22.228639 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:22.228703 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:22.228732 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.754744 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:24.765436 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:24.765518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:24.794628 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.794658 1225677 cri.go:89] found id: ""
	I1217 01:34:24.794667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:24.794732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.798378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:24.798454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:24.832756 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:24.832781 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:24.832787 1225677 cri.go:89] found id: ""
	I1217 01:34:24.832794 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:24.832850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.836854 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.840412 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:24.840572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:24.868168 1225677 cri.go:89] found id: ""
	I1217 01:34:24.868247 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.868270 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:24.868290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:24.868381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:24.899805 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:24.899825 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:24.899830 1225677 cri.go:89] found id: ""
	I1217 01:34:24.899838 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:24.899893 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.903464 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.906950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:24.907067 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:24.935718 1225677 cri.go:89] found id: ""
	I1217 01:34:24.935744 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.935753 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:24.935760 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:24.935818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:24.967779 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:24.967802 1225677 cri.go:89] found id: ""
	I1217 01:34:24.967811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:24.967863 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.971468 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:24.971534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:25.001724 1225677 cri.go:89] found id: ""
	I1217 01:34:25.001815 1225677 logs.go:282] 0 containers: []
	W1217 01:34:25.001842 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:25.001890 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:25.001925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:25.023512 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:25.023709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:25.051815 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:25.051848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:25.099451 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:25.099487 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:25.141801 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:25.141832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:25.178412 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:25.178444 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:25.285631 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:25.285667 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:25.362578 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:25.362602 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:25.362617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:25.403014 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:25.403050 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:25.510336 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:25.510395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:25.543551 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:25.543582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.129531 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:28.140763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:28.140832 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:28.184591 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.184616 1225677 cri.go:89] found id: ""
	I1217 01:34:28.184624 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:28.184707 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.188557 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:28.188634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:28.222629 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.222651 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.222656 1225677 cri.go:89] found id: ""
	I1217 01:34:28.222664 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:28.222724 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.226610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.230481 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:28.230575 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:28.257099 1225677 cri.go:89] found id: ""
	I1217 01:34:28.257126 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.257135 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:28.257142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:28.257220 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:28.291310 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:28.291347 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.291354 1225677 cri.go:89] found id: ""
	I1217 01:34:28.291388 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:28.291469 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.295342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.298970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:28.299075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:28.329122 1225677 cri.go:89] found id: ""
	I1217 01:34:28.329146 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.329155 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:28.329182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:28.329254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:28.359713 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.359736 1225677 cri.go:89] found id: ""
	I1217 01:34:28.359745 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:28.359803 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.363561 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:28.363633 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:28.397883 1225677 cri.go:89] found id: ""
	I1217 01:34:28.397910 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.397920 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:28.397929 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:28.397941 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.431945 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:28.431974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:28.482268 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:28.482300 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.509035 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:28.509067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.557586 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:28.557623 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.616155 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:28.616203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.647557 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:28.647590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.723102 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:28.723139 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:28.830255 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:28.830293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:28.849322 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:28.849355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:28.919883 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:28.919905 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:28.919926 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.492801 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:31.504000 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:31.504075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:31.539143 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.539163 1225677 cri.go:89] found id: ""
	I1217 01:34:31.539173 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:31.539228 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.543277 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:31.543355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:31.573251 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:31.573271 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:31.573275 1225677 cri.go:89] found id: ""
	I1217 01:34:31.573284 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:31.573337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.577458 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.581377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:31.581451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:31.612241 1225677 cri.go:89] found id: ""
	I1217 01:34:31.612270 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.612280 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:31.612286 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:31.612345 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:31.643539 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.643563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.643569 1225677 cri.go:89] found id: ""
	I1217 01:34:31.643578 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:31.643638 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.647841 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.651771 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:31.651855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:31.685384 1225677 cri.go:89] found id: ""
	I1217 01:34:31.685409 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.685418 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:31.685425 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:31.685487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:31.713458 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.713491 1225677 cri.go:89] found id: ""
	I1217 01:34:31.713501 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:31.713571 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.717510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:31.717598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:31.742954 1225677 cri.go:89] found id: ""
	I1217 01:34:31.742979 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.742989 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:31.742998 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:31.743030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:31.826689 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:31.826712 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:31.826726 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.858359 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:31.858389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.890466 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:31.890494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.920394 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:31.920516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:31.954114 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:31.954143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:32.048397 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:32.048463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:32.068978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:32.069014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:32.126891 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:32.126931 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:32.194493 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:32.194531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:32.278811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:32.278854 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:34.866004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:34.876932 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:34.877040 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:34.904525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:34.904548 1225677 cri.go:89] found id: ""
	I1217 01:34:34.904556 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:34.904634 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.908290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:34.908388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:34.937927 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:34.937962 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:34.937967 1225677 cri.go:89] found id: ""
	I1217 01:34:34.937975 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:34.938053 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.941844 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.945447 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:34.945529 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:34.974834 1225677 cri.go:89] found id: ""
	I1217 01:34:34.974860 1225677 logs.go:282] 0 containers: []
	W1217 01:34:34.974870 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:34.974876 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:34.974932 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:35.015100 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.015121 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.015126 1225677 cri.go:89] found id: ""
	I1217 01:34:35.015134 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:35.015196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.019378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.023124 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:35.023202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:35.055461 1225677 cri.go:89] found id: ""
	I1217 01:34:35.055488 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.055497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:35.055503 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:35.055561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:35.083009 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.083083 1225677 cri.go:89] found id: ""
	I1217 01:34:35.083107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:35.083195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.087719 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:35.087788 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:35.115588 1225677 cri.go:89] found id: ""
	I1217 01:34:35.115615 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.115625 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:35.115649 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:35.115664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:35.165942 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:35.165978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.194775 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:35.194803 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:35.291776 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:35.291811 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:35.338079 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:35.338110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:35.357793 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:35.357824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:35.428871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:35.428893 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:35.428905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.499513 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:35.499548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.540136 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:35.540211 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:35.636873 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:35.636913 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:35.665818 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:35.665889 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.220553 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:38.231749 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:38.231823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:38.259479 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.259500 1225677 cri.go:89] found id: ""
	I1217 01:34:38.259509 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:38.259568 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.263241 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:38.263385 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:38.295256 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.295292 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.295301 1225677 cri.go:89] found id: ""
	I1217 01:34:38.295310 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:38.295378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.300468 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.305174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:38.305294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:38.339161 1225677 cri.go:89] found id: ""
	I1217 01:34:38.339194 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.339204 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:38.339210 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:38.339275 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:38.367494 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.367518 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:38.367524 1225677 cri.go:89] found id: ""
	I1217 01:34:38.367531 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:38.367608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.371441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.375084 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:38.375191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:38.401755 1225677 cri.go:89] found id: ""
	I1217 01:34:38.401784 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.401795 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:38.401801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:38.401890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:38.429928 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.429962 1225677 cri.go:89] found id: ""
	I1217 01:34:38.429971 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:38.430044 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.433894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:38.433965 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:38.461088 1225677 cri.go:89] found id: ""
	I1217 01:34:38.461114 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.461124 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:38.461133 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:38.461144 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:38.544237 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:38.544274 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.574281 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:38.574312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.620093 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:38.620131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.674826 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:38.674902 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.752562 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:38.752603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.781494 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:38.781527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:38.833674 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:38.833706 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:38.933793 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:38.933832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:38.953733 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:38.953782 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:39.029298 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:39.029322 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:39.029336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
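
Each cycle above starts by discovering component containers by name through crictl rather than through the API. The invocation is taken verbatim from the log; run on the minikube node it returns zero or more container IDs (including exited ones), which is why etcd and kube-scheduler each report two IDs while kube-proxy, coredns and kindnet report none:

	# List all CRI containers (running or exited) whose name matches a component
	# and keep the first (newest) ID, as the discovery step above does.
	sudo crictl ps -a --quiet --name=kube-apiserver | head -n 1
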
	I1217 01:34:41.557003 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:41.568311 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:41.568412 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:41.601070 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:41.601089 1225677 cri.go:89] found id: ""
	I1217 01:34:41.601097 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:41.601156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.605150 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:41.605227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:41.633863 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:41.633887 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:41.633893 1225677 cri.go:89] found id: ""
	I1217 01:34:41.633901 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:41.633958 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.638555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.644087 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:41.644168 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:41.684237 1225677 cri.go:89] found id: ""
	I1217 01:34:41.684276 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.684287 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:41.684294 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:41.684371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:41.717925 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:41.717993 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.718016 1225677 cri.go:89] found id: ""
	I1217 01:34:41.718032 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:41.718109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.722478 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.726529 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:41.726607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:41.754525 1225677 cri.go:89] found id: ""
	I1217 01:34:41.754552 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.754562 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:41.754571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:41.754673 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:41.784794 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:41.784860 1225677 cri.go:89] found id: ""
	I1217 01:34:41.784883 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:41.784969 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.788882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:41.788980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:41.825117 1225677 cri.go:89] found id: ""
	I1217 01:34:41.825193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.825216 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:41.825233 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:41.825259 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:41.934154 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:41.934191 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:41.955231 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:41.955263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:42.023779 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:42.023819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:42.054183 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:42.054218 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:42.146898 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:42.147005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:42.249519 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:42.249543 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:42.249557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:42.280803 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:42.280833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:42.327682 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:42.327731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:42.373795 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:42.373832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:42.415409 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:42.415437 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
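
Once the container IDs are known, the gathering step tails each component's container log plus the CRI-O and kubelet journals, and finally dumps an overall container status with a docker fallback. The commands below appear verbatim, or in an equivalent form, in the cycle above and assume they are run on the minikube node:

	# Tail the last 400 lines of one component container and of the runtime units.
	ID=$(sudo crictl ps -a --quiet --name=etcd | head -n 1)
	sudo crictl logs --tail 400 "$ID"
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# Container status, falling back to docker if crictl is not on PATH.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
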
	I1217 01:34:44.951197 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:44.962939 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:44.963016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:44.996268 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:44.996297 1225677 cri.go:89] found id: ""
	I1217 01:34:44.996306 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:44.996365 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.016281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:45.016367 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:45.152354 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.152375 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.152380 1225677 cri.go:89] found id: ""
	I1217 01:34:45.152389 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:45.152473 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.161519 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.169793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:45.169869 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:45.269649 1225677 cri.go:89] found id: ""
	I1217 01:34:45.269685 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.269696 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:45.269715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:45.269816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:45.322137 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.322210 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:45.322250 1225677 cri.go:89] found id: ""
	I1217 01:34:45.322320 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:45.322406 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.327229 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.331531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:45.331703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:45.362501 1225677 cri.go:89] found id: ""
	I1217 01:34:45.362571 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.362602 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:45.362624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:45.362696 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:45.394160 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.394240 1225677 cri.go:89] found id: ""
	I1217 01:34:45.394258 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:45.394335 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.398315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:45.398397 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:45.426737 1225677 cri.go:89] found id: ""
	I1217 01:34:45.426780 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.426790 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:45.426819 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:45.426839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:45.503383 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:45.503464 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:45.503485 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:45.535637 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:45.535672 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.583362 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:45.583398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.613182 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:45.613214 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:45.695579 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:45.695626 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:45.729534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:45.729563 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:45.826222 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:45.826262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:45.846157 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:45.846195 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.911389 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:45.911426 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.983046 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:45.983084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
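
The section as a whole is one wait loop: roughly every three seconds minikube re-checks for a running kube-apiserver process with pgrep, and on each miss it re-runs the discovery and log-gathering pass shown above. A rough shell equivalent of that loop, where the 20-attempt bound is an illustrative assumption rather than minikube's actual timeout:

	# Poll for a kube-apiserver process about every three seconds, as the log does.
	for attempt in $(seq 1 20); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver is running"
	    break
	  fi
	  sleep 3
	done
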
	I1217 01:34:48.519530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:48.530493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:48.530565 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:48.560366 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:48.560471 1225677 cri.go:89] found id: ""
	I1217 01:34:48.560496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:48.560585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.564848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:48.564920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:48.593560 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.593628 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:48.593666 1225677 cri.go:89] found id: ""
	I1217 01:34:48.593696 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:48.593783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.597895 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.601634 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:48.601718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:48.631022 1225677 cri.go:89] found id: ""
	I1217 01:34:48.631048 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.631057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:48.631064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:48.631122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:48.656804 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:48.656829 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.656834 1225677 cri.go:89] found id: ""
	I1217 01:34:48.656841 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:48.656898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.660979 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.664698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:48.664770 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:48.692344 1225677 cri.go:89] found id: ""
	I1217 01:34:48.692372 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.692383 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:48.692389 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:48.692481 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:48.721997 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:48.722020 1225677 cri.go:89] found id: ""
	I1217 01:34:48.722029 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:48.722111 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.726120 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:48.726247 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:48.753313 1225677 cri.go:89] found id: ""
	I1217 01:34:48.753339 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.753349 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:48.753358 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:48.753388 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:48.849435 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:48.849474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:48.870486 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:48.870523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:48.943874 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:48.943904 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:48.943919 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.991171 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:48.991205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:49.020622 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:49.020649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:49.064904 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:49.064942 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:49.143148 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:49.143186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:49.174999 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:49.175086 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:49.209127 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:49.209156 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:49.296275 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:49.296325 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
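
The "describe nodes" probe that fails in every cycle is issued with the node-local kubectl binary and the node's own kubeconfig, exactly as below; while the apiserver is down it exits with status 1 and produces the connection-refused stderr captured above. Run on the node:

	# Node-local "describe nodes" call used by the log gatherer.
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
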
	I1217 01:34:51.840412 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:51.851134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:51.851204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:51.880791 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:51.880811 1225677 cri.go:89] found id: ""
	I1217 01:34:51.880820 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:51.880879 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.884883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:51.884962 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:51.911511 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:51.911535 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:51.911541 1225677 cri.go:89] found id: ""
	I1217 01:34:51.911549 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:51.911607 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.915352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.918918 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:51.918986 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:51.950127 1225677 cri.go:89] found id: ""
	I1217 01:34:51.950152 1225677 logs.go:282] 0 containers: []
	W1217 01:34:51.950163 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:51.950169 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:51.950266 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:51.978696 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:51.978725 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:51.978731 1225677 cri.go:89] found id: ""
	I1217 01:34:51.978738 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:51.978795 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.982736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.986411 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:51.986482 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:52.016886 1225677 cri.go:89] found id: ""
	I1217 01:34:52.016911 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.016920 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:52.016926 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:52.016989 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:52.045870 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.045895 1225677 cri.go:89] found id: ""
	I1217 01:34:52.045904 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:52.045962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:52.049906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:52.049977 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:52.077565 1225677 cri.go:89] found id: ""
	I1217 01:34:52.077592 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.077604 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:52.077614 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:52.077646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:52.105176 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:52.105205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:52.211964 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:52.211999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:52.252350 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:52.252382 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:52.306053 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:52.306088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:52.376262 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:52.376302 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:52.403480 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:52.403508 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.431952 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:52.431983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:52.510953 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:52.510990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:52.555450 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:52.555482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:52.574086 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:52.574119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:52.644412 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
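
The dmesg step in each cycle keeps only kernel messages at warning severity or worse, in human-readable form with the pager and colour disabled, and tails the last 400 lines. The command is taken verbatim from the cycles above and assumes it runs on the node:

	# Warning-and-worse kernel messages, human-readable, no pager, no colour.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
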
	I1217 01:34:55.144646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:55.155615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:55.155693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:55.184697 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.184716 1225677 cri.go:89] found id: ""
	I1217 01:34:55.184724 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:55.184781 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.188462 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:55.188538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:55.217937 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.217961 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.217966 1225677 cri.go:89] found id: ""
	I1217 01:34:55.217974 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:55.218030 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.221924 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.226643 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:55.226714 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:55.254617 1225677 cri.go:89] found id: ""
	I1217 01:34:55.254645 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.254655 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:55.254662 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:55.254721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:55.282393 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.282419 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.282424 1225677 cri.go:89] found id: ""
	I1217 01:34:55.282432 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:55.282485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.286357 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.289912 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:55.289992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:55.316252 1225677 cri.go:89] found id: ""
	I1217 01:34:55.316278 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.316288 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:55.316295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:55.316368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:55.343249 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.343314 1225677 cri.go:89] found id: ""
	I1217 01:34:55.343337 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:55.343433 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.347319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:55.347448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:55.381545 1225677 cri.go:89] found id: ""
	I1217 01:34:55.381629 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.381645 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:55.381656 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:55.381669 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.421981 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:55.422014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.453301 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:55.453342 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.480646 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:55.480687 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:55.570826 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.570849 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:55.570863 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.599216 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:55.599257 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.658218 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:55.658310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.745919 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:55.745955 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:55.838064 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:55.838101 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:55.888374 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:55.888405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:55.996293 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:55.996331 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:58.522397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:58.536202 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:58.536271 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:58.566870 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:58.566965 1225677 cri.go:89] found id: ""
	I1217 01:34:58.566994 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:58.567139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.571283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:58.571363 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:58.598180 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:58.598208 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.598213 1225677 cri.go:89] found id: ""
	I1217 01:34:58.598222 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:58.598297 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.602201 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.605913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:58.605997 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:58.636167 1225677 cri.go:89] found id: ""
	I1217 01:34:58.636193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.636202 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:58.636209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:58.636270 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:58.662111 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:58.662135 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:58.662140 1225677 cri.go:89] found id: ""
	I1217 01:34:58.662148 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:58.662209 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.666315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.670253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:58.670348 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:58.696144 1225677 cri.go:89] found id: ""
	I1217 01:34:58.696219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.696244 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:58.696265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:58.696347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:58.726742 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.726767 1225677 cri.go:89] found id: ""
	I1217 01:34:58.726776 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:58.726832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.730710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:58.730785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:58.759394 1225677 cri.go:89] found id: ""
	I1217 01:34:58.759421 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.759431 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:58.759440 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:58.759454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.817531 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:58.817569 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.847360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:58.847389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:58.929741 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:58.929776 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:58.968951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:58.968982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:59.043218 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:59.043239 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:59.043255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:59.070405 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:59.070431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:59.146784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:59.146829 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:59.179445 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:59.179479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:59.286441 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:59.286479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:59.308412 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:59.308540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.850397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:01.863234 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:01.863368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:01.898442 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:01.898473 1225677 cri.go:89] found id: ""
	I1217 01:35:01.898484 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:01.898577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.903064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:01.903142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:01.936524 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.936547 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:01.936551 1225677 cri.go:89] found id: ""
	I1217 01:35:01.936559 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:01.936625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.942865 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.947963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:01.948071 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:01.979359 1225677 cri.go:89] found id: ""
	I1217 01:35:01.979384 1225677 logs.go:282] 0 containers: []
	W1217 01:35:01.979393 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:01.979399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:01.979466 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:02.012882 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.012925 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.012931 1225677 cri.go:89] found id: ""
	I1217 01:35:02.012975 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:02.013055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.017605 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.021797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:02.021870 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:02.049550 1225677 cri.go:89] found id: ""
	I1217 01:35:02.049621 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.049638 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:02.049646 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:02.049722 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:02.081301 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.081326 1225677 cri.go:89] found id: ""
	I1217 01:35:02.081335 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:02.081392 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.086118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:02.086210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:02.125352 1225677 cri.go:89] found id: ""
	I1217 01:35:02.125374 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.125383 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:02.125393 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:02.125405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:02.197255 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:02.197318 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:02.197355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:02.226446 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:02.226488 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:02.271257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:02.271293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:02.314955 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:02.314988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.386430 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:02.386468 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.417607 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:02.417682 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.449011 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:02.449041 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:02.551859 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:02.551899 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:02.571928 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:02.571960 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:02.659356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:02.659395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:05.190765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:05.203695 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:05.203771 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:05.238686 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.238707 1225677 cri.go:89] found id: ""
	I1217 01:35:05.238716 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:05.238778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.242613 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:05.242687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:05.272627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.272661 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.272667 1225677 cri.go:89] found id: ""
	I1217 01:35:05.272675 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:05.272757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.277184 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.281337 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:05.281414 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:05.309340 1225677 cri.go:89] found id: ""
	I1217 01:35:05.309361 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.309370 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:05.309377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:05.309437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:05.342268 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.342294 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.342300 1225677 cri.go:89] found id: ""
	I1217 01:35:05.342308 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:05.342394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.346668 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.350724 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:05.350805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:05.378257 1225677 cri.go:89] found id: ""
	I1217 01:35:05.378289 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.378298 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:05.378305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:05.378366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:05.406348 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.406370 1225677 cri.go:89] found id: ""
	I1217 01:35:05.406379 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:05.406455 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.410653 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:05.410724 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:05.441777 1225677 cri.go:89] found id: ""
	I1217 01:35:05.441802 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.441812 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:05.441820 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:05.441832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:05.521081 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:05.521113 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:05.521127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.559491 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:05.559525 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.608690 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:05.608727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.640635 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:05.640666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:05.720771 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:05.720808 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:05.824388 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:05.824427 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.864839 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:05.864871 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.960476 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:05.960520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.992555 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:05.992588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:06.045891 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:06.045925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:08.568611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:08.579598 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:08.579681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:08.607399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.607421 1225677 cri.go:89] found id: ""
	I1217 01:35:08.607430 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:08.607485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.611906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:08.611982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:08.638447 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.638470 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:08.638476 1225677 cri.go:89] found id: ""
	I1217 01:35:08.638484 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:08.638558 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.642337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.646066 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:08.646162 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:08.673000 1225677 cri.go:89] found id: ""
	I1217 01:35:08.673026 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.673036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:08.673042 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:08.673135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:08.701768 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:08.701792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:08.701798 1225677 cri.go:89] found id: ""
	I1217 01:35:08.701806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:08.701892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.705733 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.709545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:08.709620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:08.736283 1225677 cri.go:89] found id: ""
	I1217 01:35:08.736309 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.736319 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:08.736325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:08.736383 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:08.763589 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:08.763610 1225677 cri.go:89] found id: ""
	I1217 01:35:08.763618 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:08.763679 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.768008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:08.768157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:08.794921 1225677 cri.go:89] found id: ""
	I1217 01:35:08.794948 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.794957 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:08.794967 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:08.795003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:08.866335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:08.866356 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:08.866371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.894862 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:08.894894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.945712 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:08.945749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:09.030175 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:09.030213 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:09.057626 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:09.057656 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:09.140070 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:09.140109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:09.249646 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:09.249685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:09.269874 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:09.269906 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:09.317090 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:09.317126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:09.346482 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:09.346513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:11.877651 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:11.889575 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:11.889645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:11.917211 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:11.917234 1225677 cri.go:89] found id: ""
	I1217 01:35:11.917243 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:11.917309 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.921144 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:11.921223 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:11.955516 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:11.955536 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:11.955541 1225677 cri.go:89] found id: ""
	I1217 01:35:11.955548 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:11.955604 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.959308 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.962862 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:11.962933 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:11.991261 1225677 cri.go:89] found id: ""
	I1217 01:35:11.991284 1225677 logs.go:282] 0 containers: []
	W1217 01:35:11.991293 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:11.991299 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:11.991366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:12.023452 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.023477 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.023483 1225677 cri.go:89] found id: ""
	I1217 01:35:12.023491 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:12.023581 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.027715 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.031641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:12.031751 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:12.059135 1225677 cri.go:89] found id: ""
	I1217 01:35:12.059211 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.059234 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:12.059255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:12.059343 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:12.092809 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.092830 1225677 cri.go:89] found id: ""
	I1217 01:35:12.092839 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:12.092915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.096814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:12.096963 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:12.132911 1225677 cri.go:89] found id: ""
	I1217 01:35:12.132936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.132946 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:12.132955 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:12.132966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:12.235310 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:12.235346 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:12.255554 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:12.255587 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:12.303522 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:12.303560 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.374998 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:12.375032 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:12.461333 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:12.461371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:12.547450 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:12.547475 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:12.547489 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:12.574864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:12.574892 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:12.619775 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:12.619816 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.649040 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:12.649123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.677296 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:12.677326 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.212228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:15.225138 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:15.225215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:15.259192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.259218 1225677 cri.go:89] found id: ""
	I1217 01:35:15.259228 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:15.259287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.263205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:15.263279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:15.290493 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.290516 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.290521 1225677 cri.go:89] found id: ""
	I1217 01:35:15.290529 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:15.290588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.294490 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.298107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:15.298208 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:15.325021 1225677 cri.go:89] found id: ""
	I1217 01:35:15.325047 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.325057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:15.325063 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:15.325125 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:15.353712 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.353744 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.353750 1225677 cri.go:89] found id: ""
	I1217 01:35:15.353758 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:15.353828 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.357883 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.361729 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:15.361817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:15.389342 1225677 cri.go:89] found id: ""
	I1217 01:35:15.389370 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.389379 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:15.389386 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:15.389449 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:15.418437 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.418470 1225677 cri.go:89] found id: ""
	I1217 01:35:15.418479 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:15.418553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.422466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:15.422548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:15.449297 1225677 cri.go:89] found id: ""
	I1217 01:35:15.449333 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.449343 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:15.449370 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:15.449394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:15.468355 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:15.468385 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.494969 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:15.495005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.543170 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:15.543209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:15.616803 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:15.616829 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:15.616845 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.659996 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:15.660031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.730995 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:15.731034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.758963 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:15.758994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.785562 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:15.785633 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:15.872457 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:15.872494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.904808 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:15.904838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
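The repeated block above is minikube's log collector at work: it enumerates the control-plane containers with crictl, tails each one, and pulls the CRI-O and kubelet journals. A minimal sketch of the equivalent manual collection, assuming shell access to the node (for example via `minikube ssh`); the component names and the --tail value simply mirror the commands recorded in the log:

    # enumerate containers per component and tail their logs (sketch)
    for name in kube-apiserver etcd kube-scheduler kube-controller-manager coredns kube-proxy kindnet; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "== $name $id =="
        sudo crictl logs --tail 400 "$id"
      done
    done
    sudo journalctl -u crio -n 400      # CRI-O runtime logs
    sudo journalctl -u kubelet -n 400   # kubelet logs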
	I1217 01:35:18.506161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:18.518520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:18.518589 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:18.550949 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.550972 1225677 cri.go:89] found id: ""
	I1217 01:35:18.550982 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:18.551041 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.554800 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:18.554880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:18.582497 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:18.582522 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.582527 1225677 cri.go:89] found id: ""
	I1217 01:35:18.582535 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:18.582594 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.586831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.590486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:18.590560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:18.617401 1225677 cri.go:89] found id: ""
	I1217 01:35:18.617426 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.617436 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:18.617443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:18.617504 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:18.648400 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:18.648458 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.648464 1225677 cri.go:89] found id: ""
	I1217 01:35:18.648472 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:18.648530 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.652380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.655820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:18.655916 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:18.689519 1225677 cri.go:89] found id: ""
	I1217 01:35:18.689544 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.689553 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:18.689560 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:18.689621 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:18.718284 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:18.718306 1225677 cri.go:89] found id: ""
	I1217 01:35:18.718313 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:18.718368 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.722268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:18.722372 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:18.753514 1225677 cri.go:89] found id: ""
	I1217 01:35:18.753542 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.753558 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:18.753567 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:18.753611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:18.771813 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:18.771842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:18.845441 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:18.845463 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:18.845477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.872553 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:18.872582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.922099 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:18.922176 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.950258 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:18.950285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:18.990211 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:18.990241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:19.031127 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:19.031164 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:19.107071 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:19.107109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:19.138299 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:19.138327 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:19.222624 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:19.222660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:21.834640 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:21.845711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:21.845784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:21.895249 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:21.895280 1225677 cri.go:89] found id: ""
	I1217 01:35:21.895292 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:21.895371 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.902322 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:21.902404 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:21.943815 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:21.943857 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:21.943863 1225677 cri.go:89] found id: ""
	I1217 01:35:21.943877 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:21.943963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.949206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.954547 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:21.954640 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:21.988594 1225677 cri.go:89] found id: ""
	I1217 01:35:21.988620 1225677 logs.go:282] 0 containers: []
	W1217 01:35:21.988630 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:21.988636 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:21.988718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:22.024625 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.024646 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.024651 1225677 cri.go:89] found id: ""
	I1217 01:35:22.024660 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:22.024760 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.029143 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.033935 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:22.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:22.067922 1225677 cri.go:89] found id: ""
	I1217 01:35:22.067946 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.067955 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:22.067961 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:22.068020 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:22.097619 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.097641 1225677 cri.go:89] found id: ""
	I1217 01:35:22.097649 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:22.097706 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.101692 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:22.101766 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:22.136868 1225677 cri.go:89] found id: ""
	I1217 01:35:22.136891 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.136900 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:22.136911 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:22.136923 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:22.164209 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:22.164236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:22.208399 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:22.208512 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:22.256618 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:22.256650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.287201 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:22.287237 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.314443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:22.314472 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:22.346752 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:22.346780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:22.445530 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:22.445567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:22.464378 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:22.464409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.554715 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:22.554749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:22.659061 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:22.659103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:22.731143 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
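Every kubectl invocation above fails with "connection refused" on localhost:8443: a kube-apiserver container ID is found, but nothing is answering on the apiserver port yet. A minimal sketch of checks one might run from inside the node in this state; the pgrep pattern mirrors the probe minikube itself runs above, and the /healthz probe is an assumption about the standard apiserver health endpoint:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # is the apiserver process running?
    sudo crictl ps -a --name=kube-apiserver        # container state (Running vs Exited)
    curl -sk https://localhost:8443/healthz; echo  # does anything answer on port 8443?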
	I1217 01:35:25.231455 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:25.242812 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:25.242949 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:25.280443 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.280470 1225677 cri.go:89] found id: ""
	I1217 01:35:25.280478 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:25.280536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.284885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:25.285008 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:25.313823 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.313846 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.313852 1225677 cri.go:89] found id: ""
	I1217 01:35:25.313859 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:25.313939 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.317952 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.321539 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:25.321620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:25.354565 1225677 cri.go:89] found id: ""
	I1217 01:35:25.354632 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.354656 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:25.354681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:25.354777 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:25.386743 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.386774 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.386779 1225677 cri.go:89] found id: ""
	I1217 01:35:25.386787 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:25.386857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.390671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.394226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:25.394339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:25.421123 1225677 cri.go:89] found id: ""
	I1217 01:35:25.421212 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.421228 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:25.421236 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:25.421310 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:25.448879 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.448904 1225677 cri.go:89] found id: ""
	I1217 01:35:25.448913 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:25.448971 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.452707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:25.452782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:25.479351 1225677 cri.go:89] found id: ""
	I1217 01:35:25.479379 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.479389 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:25.479399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:25.479410 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:25.577317 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:25.577354 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:25.600156 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:25.600203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:25.679524 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.679600 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:25.679621 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.706792 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:25.706824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.764895 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:25.764934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.796158 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:25.796188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.823684 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:25.823721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:25.857273 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:25.857303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.915963 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:25.916003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.992485 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:25.992520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:28.577965 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:28.588733 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:28.588802 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:28.621192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.621211 1225677 cri.go:89] found id: ""
	I1217 01:35:28.621220 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:28.621279 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.625055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:28.625124 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:28.651718 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:28.651738 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.651742 1225677 cri.go:89] found id: ""
	I1217 01:35:28.651749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:28.651807 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.656353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.660550 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:28.660620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:28.688556 1225677 cri.go:89] found id: ""
	I1217 01:35:28.688580 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.688589 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:28.688596 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:28.688654 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:28.716478 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:28.716503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:28.716508 1225677 cri.go:89] found id: ""
	I1217 01:35:28.716516 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:28.716603 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.720442 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.723785 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:28.723862 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:28.750780 1225677 cri.go:89] found id: ""
	I1217 01:35:28.750807 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.750817 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:28.750823 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:28.750882 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:28.777746 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:28.777772 1225677 cri.go:89] found id: ""
	I1217 01:35:28.777781 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:28.777836 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.781586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:28.781707 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:28.812032 1225677 cri.go:89] found id: ""
	I1217 01:35:28.812062 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.812072 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:28.812081 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:28.812115 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:28.910028 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:28.910067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.938533 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:28.938565 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.982530 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:28.982566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:29.059912 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:29.059948 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:29.087417 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:29.087449 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:29.141591 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:29.141622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:29.162662 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:29.162694 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:29.245511 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:29.245537 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:29.245553 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:29.286747 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:29.286784 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:29.317045 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:29.317075 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
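The cycles repeat at roughly three-second intervals: minikube re-checks for the apiserver process, re-gathers the same logs, and retries `kubectl describe nodes`. A rough sketch of a comparable wait loop, assuming a bash shell on the node and a hypothetical 120-second budget (not minikube's actual timeout):

    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3   # matches the polling cadence seen in the log
    done
    echo "kube-apiserver process is up"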
	I1217 01:35:31.896935 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:31.908531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:31.908605 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:31.951663 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:31.951684 1225677 cri.go:89] found id: ""
	I1217 01:35:31.951692 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:31.951746 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.956325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:31.956501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:31.990512 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:31.990578 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:31.990598 1225677 cri.go:89] found id: ""
	I1217 01:35:31.990625 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:31.990708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.994957 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.001450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:32.001597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:32.033107 1225677 cri.go:89] found id: ""
	I1217 01:35:32.033136 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.033146 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:32.033153 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:32.033245 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:32.061118 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.061140 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.061145 1225677 cri.go:89] found id: ""
	I1217 01:35:32.061153 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:32.061208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.065195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.068963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:32.069066 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:32.099914 1225677 cri.go:89] found id: ""
	I1217 01:35:32.099941 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.099951 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:32.099957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:32.100018 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:32.134003 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.134028 1225677 cri.go:89] found id: ""
	I1217 01:35:32.134044 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:32.134101 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.138837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:32.138909 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:32.178095 1225677 cri.go:89] found id: ""
	I1217 01:35:32.178168 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.178193 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:32.178210 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:32.178223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:32.219018 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:32.219049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:32.328076 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:32.328182 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:32.347854 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:32.347887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:32.389069 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:32.389143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.464016 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:32.464052 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.492348 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:32.492466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.519965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:32.520035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:32.589420 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:32.589485 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:32.589506 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:32.615780 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:32.615814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:32.668491 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:32.668527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.253556 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:35.266266 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:35.266344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:35.303632 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.303658 1225677 cri.go:89] found id: ""
	I1217 01:35:35.303667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:35.303726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.307439 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:35.307511 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:35.336107 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.336131 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.336136 1225677 cri.go:89] found id: ""
	I1217 01:35:35.336143 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:35.336196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.340106 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.343587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:35.343667 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:35.374453 1225677 cri.go:89] found id: ""
	I1217 01:35:35.374483 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.374492 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:35.374498 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:35.374560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:35.401769 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.401792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.401798 1225677 cri.go:89] found id: ""
	I1217 01:35:35.401806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:35.401860 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.405507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.409182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:35.409254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:35.437191 1225677 cri.go:89] found id: ""
	I1217 01:35:35.437229 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.437280 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:35.437303 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:35.437454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:35.464026 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.464048 1225677 cri.go:89] found id: ""
	I1217 01:35:35.464056 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:35.464113 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.467752 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:35.467854 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:35.495119 1225677 cri.go:89] found id: ""
	I1217 01:35:35.495143 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.495152 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:35.495161 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:35.495173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.538118 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:35.538157 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.612361 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:35.612398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.642424 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:35.642454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.671140 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:35.671168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.753840 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:35.753879 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:35.791176 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:35.791207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:35.861567 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:35.861588 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:35.861604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.887544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:35.887573 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.930868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:35.930901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:36.035955 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:36.035997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.556940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:38.568341 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:38.568410 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:38.602139 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:38.602163 1225677 cri.go:89] found id: ""
	I1217 01:35:38.602172 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:38.602234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.606168 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:38.606244 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:38.636762 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:38.636782 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:38.636787 1225677 cri.go:89] found id: ""
	I1217 01:35:38.636795 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:38.636849 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.640703 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.644870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:38.644980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:38.672028 1225677 cri.go:89] found id: ""
	I1217 01:35:38.672105 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.672130 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:38.672152 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:38.672252 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:38.702063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:38.702088 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:38.702096 1225677 cri.go:89] found id: ""
	I1217 01:35:38.702104 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:38.702189 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.706075 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.710843 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:38.710923 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:38.739176 1225677 cri.go:89] found id: ""
	I1217 01:35:38.739204 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.739214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:38.739221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:38.739281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:38.765721 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:38.765749 1225677 cri.go:89] found id: ""
	I1217 01:35:38.765759 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:38.765835 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.769950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:38.770026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:38.797985 1225677 cri.go:89] found id: ""
	I1217 01:35:38.798013 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.798023 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:38.798033 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:38.798065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:38.898407 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:38.898448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.917886 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:38.917920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:38.999335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:38.999368 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:38.999384 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:39.041692 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:39.041729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:39.089675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:39.089712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:39.172952 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:39.172988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:39.211704 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:39.211736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:39.241891 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:39.241920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:39.276958 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:39.276988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:39.364067 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:39.364119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:41.897002 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:41.908024 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:41.908100 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:41.937482 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:41.937556 1225677 cri.go:89] found id: ""
	I1217 01:35:41.937569 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:41.937630 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.941542 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:41.941611 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:41.987116 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:41.987139 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:41.987145 1225677 cri.go:89] found id: ""
	I1217 01:35:41.987153 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:41.987206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.991091 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.994831 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:41.994905 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:42.033990 1225677 cri.go:89] found id: ""
	I1217 01:35:42.034016 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.034025 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:42.034031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:42.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:42.065878 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:42.065959 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.065980 1225677 cri.go:89] found id: ""
	I1217 01:35:42.066005 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:42.066122 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.071367 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.076378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:42.076531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:42.123414 1225677 cri.go:89] found id: ""
	I1217 01:35:42.123521 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.123583 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:42.123610 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:42.123706 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:42.163210 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.163302 1225677 cri.go:89] found id: ""
	I1217 01:35:42.163328 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:42.163431 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.168650 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:42.168758 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:42.211741 1225677 cri.go:89] found id: ""
	I1217 01:35:42.211767 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.211777 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:42.211787 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:42.211800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:42.252091 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:42.252126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:42.356409 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:42.356465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:42.377129 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:42.377163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:42.449855 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:42.449879 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:42.449893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:42.476498 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:42.476530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:42.518303 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:42.518337 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.548819 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:42.548852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.578811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:42.578840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:42.658356 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:42.658395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:42.700126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:42.700173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.276979 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:45.301570 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:45.301737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:45.339316 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:45.339342 1225677 cri.go:89] found id: ""
	I1217 01:35:45.339351 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:45.339441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.343543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:45.343652 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:45.374479 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.374552 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.374574 1225677 cri.go:89] found id: ""
	I1217 01:35:45.374600 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:45.374672 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.378901 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.382870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:45.382942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:45.413785 1225677 cri.go:89] found id: ""
	I1217 01:35:45.413816 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.413825 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:45.413832 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:45.413894 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:45.446395 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.446417 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.446423 1225677 cri.go:89] found id: ""
	I1217 01:35:45.446431 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:45.446508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.450414 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.454372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:45.454448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:45.483846 1225677 cri.go:89] found id: ""
	I1217 01:35:45.483918 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.483942 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:45.483963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:45.484039 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:45.515890 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.515962 1225677 cri.go:89] found id: ""
	I1217 01:35:45.515986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:45.516060 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.519980 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:45.520107 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:45.548900 1225677 cri.go:89] found id: ""
	I1217 01:35:45.548984 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.549001 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:45.549011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:45.549023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.594641 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:45.594680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.623072 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:45.623171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:45.701558 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:45.701599 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:45.775358 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:45.775423 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:45.775443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.822675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:45.822712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.904212 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:45.904249 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.934553 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:45.934581 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:45.966200 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:45.966231 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:46.073612 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:46.073651 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:46.092826 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:46.092860 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.626362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:48.637081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:48.637157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:48.663951 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.664018 1225677 cri.go:89] found id: ""
	I1217 01:35:48.664045 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:48.664137 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.667889 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:48.668007 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:48.695424 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:48.695498 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:48.695518 1225677 cri.go:89] found id: ""
	I1217 01:35:48.695570 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:48.695667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.699980 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.703779 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:48.703875 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:48.731347 1225677 cri.go:89] found id: ""
	I1217 01:35:48.731372 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.731381 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:48.731388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:48.731448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:48.761776 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:48.761802 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:48.761808 1225677 cri.go:89] found id: ""
	I1217 01:35:48.761816 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:48.761875 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.766072 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.769796 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:48.769871 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:48.799377 1225677 cri.go:89] found id: ""
	I1217 01:35:48.799404 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.799412 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:48.799418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:48.799477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:48.828149 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:48.828173 1225677 cri.go:89] found id: ""
	I1217 01:35:48.828192 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:48.828254 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.832599 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:48.832717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:48.858554 1225677 cri.go:89] found id: ""
	I1217 01:35:48.858587 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.858597 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:48.858626 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:48.858643 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:48.894472 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:48.894502 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:48.969952 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:48.969978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:48.969994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:49.014023 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:49.014058 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:49.092630 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:49.092671 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:49.197053 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:49.197088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:49.225929 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:49.225963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:49.253145 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:49.253174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:49.301391 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:49.301428 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:49.337786 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:49.337819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:49.367000 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:49.367029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:51.942903 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:51.957586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:51.957662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:52.007996 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.008017 1225677 cri.go:89] found id: ""
	I1217 01:35:52.008026 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:52.008082 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.015080 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:52.015148 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:52.052213 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.052249 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.052255 1225677 cri.go:89] found id: ""
	I1217 01:35:52.052262 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:52.052318 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.056182 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.059959 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:52.060033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:52.090239 1225677 cri.go:89] found id: ""
	I1217 01:35:52.090264 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.090274 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:52.090281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:52.090341 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:52.118854 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:52.118874 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.118879 1225677 cri.go:89] found id: ""
	I1217 01:35:52.118886 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:52.118946 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.125093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.128837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:52.128931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:52.157907 1225677 cri.go:89] found id: ""
	I1217 01:35:52.157936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.157945 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:52.157957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:52.158017 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:52.191428 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.191451 1225677 cri.go:89] found id: ""
	I1217 01:35:52.191459 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:52.191543 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.195375 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:52.195456 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:52.224407 1225677 cri.go:89] found id: ""
	I1217 01:35:52.224468 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.224477 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:52.224486 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:52.224498 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.252950 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:52.252981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.279228 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:52.279258 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:52.298974 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:52.299007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:52.370510 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:52.370544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:52.370588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.418893 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:52.418934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:52.499956 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:52.499992 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:52.542158 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:52.542187 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:52.643325 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:52.643367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.671238 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:52.671267 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.712214 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:52.712252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
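
Each pass above enumerates the control-plane containers one component at a time with `crictl ps -a --quiet --name=<component>` and then pulls the last 400 log lines from every hit. A minimal shell sketch of the same lookup, assuming crictl is on the node's PATH and reusing the component names and tail length seen in this log (it is a reproduction aid only, not the harness code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")                 # container IDs for this component, if any
      [ -z "$ids" ] && { echo "no container found matching \"$name\""; continue; }
      for id in $ids; do
        sudo /usr/local/bin/crictl logs --tail 400 "$id"              # same tail length the harness uses
      done
    done
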
	I1217 01:35:55.294635 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:55.305795 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:55.305897 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:55.341120 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.341143 1225677 cri.go:89] found id: ""
	I1217 01:35:55.341152 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:55.341208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.345154 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:55.345236 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:55.376865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.376937 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.376959 1225677 cri.go:89] found id: ""
	I1217 01:35:55.376982 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:55.377065 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.381380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.385355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:55.385472 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:55.412679 1225677 cri.go:89] found id: ""
	I1217 01:35:55.412701 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.412710 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:55.412716 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:55.412773 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:55.439554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.439573 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.439578 1225677 cri.go:89] found id: ""
	I1217 01:35:55.439585 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:55.439639 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.443337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.446737 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:55.446804 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:55.478015 1225677 cri.go:89] found id: ""
	I1217 01:35:55.478039 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.478052 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:55.478065 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:55.478136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:55.503877 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:55.503940 1225677 cri.go:89] found id: ""
	I1217 01:35:55.503964 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:55.504038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.507809 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:55.507880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:55.539899 1225677 cri.go:89] found id: ""
	I1217 01:35:55.539926 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.539935 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:55.539951 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:55.539963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:55.642073 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:55.642111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:55.662102 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:55.662143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.689162 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:55.689192 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.728771 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:55.728804 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.755851 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:55.755878 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:55.839759 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:55.839805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:55.910162 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:55.910183 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:55.910197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.962626 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:55.962664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:56.057075 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:56.057126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:56.095037 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:56.095069 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
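
Every `kubectl describe nodes` attempt in these passes fails with `connect: connection refused` against localhost:8443, i.e. the kube-apiserver is not serving on the node yet. A quick probe of that endpoint, assuming it is run on the minikube node and that 8443 is the apiserver port shown in the log:

    curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "kube-apiserver on localhost:8443 is not reachable"

While the apiserver is down this reproduces the connection-refused behaviour seen in the describe-nodes stderr; once it is serving, /healthz typically returns `ok`.
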
	I1217 01:35:58.632280 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:58.643092 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:58.643199 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:58.670245 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:58.670268 1225677 cri.go:89] found id: ""
	I1217 01:35:58.670277 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:58.670332 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.673988 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:58.674059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:58.706113 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:58.706135 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:58.706140 1225677 cri.go:89] found id: ""
	I1217 01:35:58.706148 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:58.706234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.710732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.714631 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:58.714747 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:58.742956 1225677 cri.go:89] found id: ""
	I1217 01:35:58.742982 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.742991 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:58.742997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:58.743058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:58.774022 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:58.774044 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:58.774050 1225677 cri.go:89] found id: ""
	I1217 01:35:58.774058 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:58.774112 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.778073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.781607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:58.781686 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:58.808679 1225677 cri.go:89] found id: ""
	I1217 01:35:58.808703 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.808719 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:58.808725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:58.808785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:58.835922 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:58.835942 1225677 cri.go:89] found id: ""
	I1217 01:35:58.835951 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:58.836007 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.839615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:58.839689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:58.866788 1225677 cri.go:89] found id: ""
	I1217 01:35:58.866813 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.866823 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:58.866833 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:58.866866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:58.968702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:58.968738 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:58.989939 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:58.989967 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:59.058020 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:59.058046 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:59.058059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:59.088364 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:59.088394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:59.141100 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:59.141135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:59.232851 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:59.232891 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:59.262771 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:59.262800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:59.290187 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:59.290224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:59.339890 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:59.339924 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:59.422198 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:59.422236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:01.956538 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:01.967590 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:01.967660 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:02.007538 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.007575 1225677 cri.go:89] found id: ""
	I1217 01:36:02.007584 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:02.007670 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.012001 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:02.012136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:02.046710 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.046735 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.046741 1225677 cri.go:89] found id: ""
	I1217 01:36:02.046749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:02.046804 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.050667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.054450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:02.054546 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:02.081851 1225677 cri.go:89] found id: ""
	I1217 01:36:02.081880 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.081890 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:02.081897 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:02.081980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:02.112077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.112101 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.112106 1225677 cri.go:89] found id: ""
	I1217 01:36:02.112114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:02.112169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.116263 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.121396 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:02.121492 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:02.152376 1225677 cri.go:89] found id: ""
	I1217 01:36:02.152404 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.152497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:02.152523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:02.152642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:02.187133 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.187159 1225677 cri.go:89] found id: ""
	I1217 01:36:02.187168 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:02.187247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.191078 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:02.191173 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:02.220566 1225677 cri.go:89] found id: ""
	I1217 01:36:02.220593 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.220602 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:02.220611 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:02.220659 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.253992 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:02.254021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.304043 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:02.304077 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.350981 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:02.351020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.431358 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:02.431393 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.458269 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:02.458298 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:02.561780 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:02.561820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:02.582487 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:02.582522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:02.663558 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:02.663583 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:02.663596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.700536 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:02.700568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:02.775505 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:02.775547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
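
Besides the per-container logs, each pass gathers the same fixed set of host-level sources: the kubelet and CRI-O journals, filtered dmesg, and overall container status. Collected from the Run: lines above, with a placeholder where a container ID would go:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                                                 # container status
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>         # <container-id> is a placeholder
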
	I1217 01:36:05.310734 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:05.322909 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:05.322985 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:05.350653 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.350738 1225677 cri.go:89] found id: ""
	I1217 01:36:05.350762 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:05.350819 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.355346 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:05.355461 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:05.385411 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:05.385439 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.385445 1225677 cri.go:89] found id: ""
	I1217 01:36:05.385453 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:05.385511 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.389761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.393387 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:05.393463 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:05.420412 1225677 cri.go:89] found id: ""
	I1217 01:36:05.420495 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.420505 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:05.420511 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:05.420569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:05.452034 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:05.452060 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.452066 1225677 cri.go:89] found id: ""
	I1217 01:36:05.452075 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:05.452131 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.456205 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.460128 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:05.460221 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:05.486956 1225677 cri.go:89] found id: ""
	I1217 01:36:05.486986 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.486995 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:05.487002 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:05.487063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:05.518138 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.518160 1225677 cri.go:89] found id: ""
	I1217 01:36:05.518169 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:05.518227 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.522038 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:05.522112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:05.552883 1225677 cri.go:89] found id: ""
	I1217 01:36:05.552951 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.552969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:05.552980 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:05.552994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.580975 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:05.581006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:05.677135 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:05.677178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:05.697133 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:05.697163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.725150 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:05.725181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.768358 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:05.768396 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.794846 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:05.794876 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:05.871841 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:05.871921 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.905951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:05.905982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:05.976460 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:05.976482 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:05.976495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:06.030179 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:06.030260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.614353 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:08.625446 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:08.625527 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:08.652272 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.652300 1225677 cri.go:89] found id: ""
	I1217 01:36:08.652309 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:08.652372 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.656164 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:08.656237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:08.682167 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.682186 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:08.682190 1225677 cri.go:89] found id: ""
	I1217 01:36:08.682198 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:08.682258 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.686632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.690338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:08.690409 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:08.717708 1225677 cri.go:89] found id: ""
	I1217 01:36:08.717732 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.717741 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:08.717748 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:08.717805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:08.754193 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.754217 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:08.754222 1225677 cri.go:89] found id: ""
	I1217 01:36:08.754229 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:08.754285 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.758295 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.761917 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:08.762011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:08.793723 1225677 cri.go:89] found id: ""
	I1217 01:36:08.793750 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.793761 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:08.793774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:08.793833 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:08.820995 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:08.821018 1225677 cri.go:89] found id: ""
	I1217 01:36:08.821027 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:08.821109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.824969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:08.825043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:08.850861 1225677 cri.go:89] found id: ""
	I1217 01:36:08.850896 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.850906 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:08.850917 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:08.850929 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:08.927540 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:08.927562 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:08.927576 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.953082 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:08.953110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.994744 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:08.994781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:09.027277 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:09.027305 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:09.056339 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:09.056367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:09.129785 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:09.129820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:09.161526 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:09.161607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:09.261869 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:09.261908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:09.282618 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:09.282652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:09.328912 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:09.328949 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
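
The timestamps show the harness re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds and repeating the whole gathering pass while the apiserver stays unreachable. One bare-bones way to poll for a running apiserver process by hand (the pgrep pattern is taken from the log; the loop and the 3-second interval are illustrative, matching the cadence seen above, and are not the harness code):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "$(date -u '+%H:%M:%S') kube-apiserver not running yet"
      sleep 3
    done
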
	I1217 01:36:11.909228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:11.920145 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:11.920215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:11.953558 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:11.953581 1225677 cri.go:89] found id: ""
	I1217 01:36:11.953589 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:11.953643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.957221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:11.957293 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:11.984240 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:11.984263 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:11.984268 1225677 cri.go:89] found id: ""
	I1217 01:36:11.984276 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:11.984336 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.987996 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.991849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:11.991924 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:12.022066 1225677 cri.go:89] found id: ""
	I1217 01:36:12.022096 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.022106 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:12.022113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:12.022174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:12.058540 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.058563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.058569 1225677 cri.go:89] found id: ""
	I1217 01:36:12.058577 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:12.058629 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.063379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.067419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:12.067548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:12.095872 1225677 cri.go:89] found id: ""
	I1217 01:36:12.095900 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.095922 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:12.095929 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:12.095998 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:12.134836 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.134910 1225677 cri.go:89] found id: ""
	I1217 01:36:12.134933 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:12.135022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.139454 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:12.139524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:12.178455 1225677 cri.go:89] found id: ""
	I1217 01:36:12.178481 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.178491 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:12.178500 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:12.178538 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.215176 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:12.215204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:12.304978 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:12.305015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:12.342716 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:12.342745 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:12.444908 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:12.444945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:12.463288 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:12.463316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:12.536568 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:12.536589 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:12.536603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:12.576446 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:12.576479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.652969 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:12.653004 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.684862 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:12.684893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:12.713785 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:12.713815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:15.267669 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:15.282407 1225677 out.go:203] 
	W1217 01:36:15.285472 1225677 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 01:36:15.285518 1225677 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 01:36:15.285531 1225677 out.go:285] * Related issues:
	W1217 01:36:15.285545 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 01:36:15.285561 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 01:36:15.288521 1225677 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.00263192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.018401147Z" level=info msg="Created container 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62: kube-system/storage-provisioner/storage-provisioner" id=1949dc31-1f1c-4b50-a2e1-37b3fdbf1dae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.019096564Z" level=info msg="Starting container: 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62" id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.02762405Z" level=info msg="Started container" PID=1465 containerID=69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62 description=kube-system/storage-provisioner/storage-provisioner id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer sandboxID=201ec2eb9e7bac96947c26eb05eaeb60a6c9cb562fc7abd5b112bcffc3034df6
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.942366958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946089951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.9461257Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946150479Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949691184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.94972877Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949750136Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953024484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953060389Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953083707Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956843738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956882473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.984628463Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=d06134a9-f254-4735-8afd-66ee773b0add name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.986619446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=64030ed7-d453-4dae-a62d-31943ce0a699 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988074458Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988182542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.010661643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.011529823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.034308469Z" level=info msg="Created container bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.036802709Z" level=info msg="Starting container: bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee" id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.042056225Z" level=info msg="Started container" PID=1514 containerID=bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee description=kube-system/kube-controller-manager-ha-202151/kube-controller-manager id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5021c181f938b38114a133bf254586f8ff5e1e22eea40c87bb44019760307250
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bbbccca1f1945       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   5 minutes ago       Running             kube-controller-manager   7                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	69c29e5195bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       7                   201ec2eb9e7ba       storage-provisioner                 kube-system
	3345ee69cef2f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   6                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	e2674511b7c44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       6                   201ec2eb9e7ba       storage-provisioner                 kube-system
	5b41f976d94aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   7991c76c60a45       coredns-66bc5c9577-km6lq            kube-system
	f78b81e996c76       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   b40c6af808cd2       busybox-7b57f96db7-hw4rm            default
	4f3ffacfcf52c       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   7 minutes ago       Running             kube-proxy                2                   db6cac339dafd       kube-proxy-5gdc5                    kube-system
	cc242e356e74c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   416ecd7d82605       coredns-66bc5c9577-4s6qf            kube-system
	421b902e0a04a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   0059b57d997fb       kindnet-7b5wx                       kube-system
	9deff052e5328       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   cdd6d86a58561       etcd-ha-202151                      kube-system
	b08781420f13d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            3                   55c73e3aeca0b       kube-apiserver-ha-202151            kube-system
	d2d094f7ce12d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   9fa81adaf2298       kube-scheduler-ha-202151            kube-system
	f70584959dd02       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  2                   5cb308ab59abd       kube-vip-ha-202151                  kube-system
	
	
	==> coredns [5b41f976d94aab2a66d015407415d4106cf8778628764f4904a5062779241af6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cc242e356e74c1c82ae80013999351dff6fb19a83d4a91a90cd125e034418779] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-202151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:36:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-202151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                7edb1e1f-1b17-415f-9229-48ba3527eefe
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hw4rm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-4s6qf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 coredns-66bc5c9577-km6lq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 etcd-ha-202151                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         23m
	  kube-system                 kindnet-7b5wx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-202151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-202151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-5gdc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-202151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-202151                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m35s                  kube-proxy       
	  Normal   Starting                 9m39s                  kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m                    kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 23m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 23m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23m                    kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m                    kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeReady                22m                    kubelet          Node ha-202151 status is now: NodeReady
	  Normal   RegisteredNode           21m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m38s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           9m3s                   node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     7m47s (x8 over 7m48s)  kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m47s (x8 over 7m48s)  kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m47s (x8 over 7m48s)  kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m5s                   node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	
	
	Name:               ha-202151-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:13:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-202151-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                04eb29d0-5ea5-46d1-ae46-afe3ee374602
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rz794                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 etcd-ha-202151-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-nt6qx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-202151-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-202151-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-hp525                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-202151-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-202151-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m24s              kube-proxy       
	  Normal   Starting                 22m                kube-proxy       
	  Normal   RegisteredNode           22m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           22m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             18m                node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m38s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           9m37s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           9m3s               node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           5m5s               node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             4m15s              node-controller  Node ha-202151-m02 status is now: NodeNotReady
	
	
	Name:               ha-202151-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:16:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-202151-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                84c842f9-c3a2-4245-b176-e32c4cbe3e2c
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2d7p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-cntp7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-proxy-kqgdw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 8m50s                 kube-proxy       
	  Normal   Starting                 20m                   kube-proxy       
	  Normal   NodeHasSufficientPID     20m (x3 over 20m)     kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     20m                   cidrAllocator    Node ha-202151-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           20m                   node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeHasSufficientMemory  20m (x3 over 20m)     kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x3 over 20m)     kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                   node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           20m                   node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeReady                19m                   kubelet          Node ha-202151-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m38s                 node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           9m37s                 node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   Starting                 9m12s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m12s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m9s (x8 over 9m12s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m9s (x8 over 9m12s)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m9s (x8 over 9m12s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m3s                  node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           5m5s                  node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeNotReady             4m15s                 node-controller  Node ha-202151-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	[Dec17 01:12] overlayfs: idmapped layers are currently not supported
	[Dec17 01:13] overlayfs: idmapped layers are currently not supported
	[Dec17 01:14] overlayfs: idmapped layers are currently not supported
	[Dec17 01:16] overlayfs: idmapped layers are currently not supported
	[Dec17 01:17] overlayfs: idmapped layers are currently not supported
	[Dec17 01:19] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c] <==
	{"level":"info","ts":"2025-12-17T01:30:14.263103Z","caller":"traceutil/trace.go:172","msg":"trace[949367018] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4s6qf; range_end:; response_count:1; response_revision:3412; }","duration":"104.813001ms","start":"2025-12-17T01:30:14.158280Z","end":"2025-12-17T01:30:14.263093Z","steps":["trace[949367018] 'agreement among raft nodes before linearized reading'  (duration: 104.719317ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263447Z","caller":"traceutil/trace.go:172","msg":"trace[1053355413] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:3412; }","duration":"105.36858ms","start":"2025-12-17T01:30:14.158070Z","end":"2025-12-17T01:30:14.263439Z","steps":["trace[1053355413] 'agreement among raft nodes before linearized reading'  (duration: 105.337525ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263678Z","caller":"traceutil/trace.go:172","msg":"trace[1642161222] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:3412; }","duration":"105.615171ms","start":"2025-12-17T01:30:14.158056Z","end":"2025-12-17T01:30:14.263671Z","steps":["trace[1642161222] 'agreement among raft nodes before linearized reading'  (duration: 105.574171ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263977Z","caller":"traceutil/trace.go:172","msg":"trace[1375962484] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:3412; }","duration":"105.938134ms","start":"2025-12-17T01:30:14.158032Z","end":"2025-12-17T01:30:14.263970Z","steps":["trace[1375962484] 'agreement among raft nodes before linearized reading'  (duration: 105.887731ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264496Z","caller":"traceutil/trace.go:172","msg":"trace[240166330] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:3412; }","duration":"106.723615ms","start":"2025-12-17T01:30:14.157763Z","end":"2025-12-17T01:30:14.264487Z","steps":["trace[240166330] 'agreement among raft nodes before linearized reading'  (duration: 106.683485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264648Z","caller":"traceutil/trace.go:172","msg":"trace[801862479] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:3412; }","duration":"106.901646ms","start":"2025-12-17T01:30:14.157741Z","end":"2025-12-17T01:30:14.264642Z","steps":["trace[801862479] 'agreement among raft nodes before linearized reading'  (duration: 106.880281ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264713Z","caller":"traceutil/trace.go:172","msg":"trace[1298748005] range","detail":"{range_begin:/registry/resourceslices/; range_end:/registry/resourceslices0; response_count:0; response_revision:3412; }","duration":"106.990711ms","start":"2025-12-17T01:30:14.157718Z","end":"2025-12-17T01:30:14.264709Z","steps":["trace[1298748005] 'agreement among raft nodes before linearized reading'  (duration: 106.971667ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264867Z","caller":"traceutil/trace.go:172","msg":"trace[1872430785] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:3412; }","duration":"107.168462ms","start":"2025-12-17T01:30:14.157694Z","end":"2025-12-17T01:30:14.264862Z","steps":["trace[1872430785] 'agreement among raft nodes before linearized reading'  (duration: 107.100657ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265008Z","caller":"traceutil/trace.go:172","msg":"trace[546890442] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:2; response_revision:3412; }","duration":"107.336868ms","start":"2025-12-17T01:30:14.157666Z","end":"2025-12-17T01:30:14.265003Z","steps":["trace[546890442] 'agreement among raft nodes before linearized reading'  (duration: 107.286695ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265113Z","caller":"traceutil/trace.go:172","msg":"trace[160706393] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:3412; }","duration":"107.464209ms","start":"2025-12-17T01:30:14.157644Z","end":"2025-12-17T01:30:14.265109Z","steps":["trace[160706393] 'agreement among raft nodes before linearized reading'  (duration: 107.444935ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265358Z","caller":"traceutil/trace.go:172","msg":"trace[60800954] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:59; response_revision:3412; }","duration":"107.734782ms","start":"2025-12-17T01:30:14.157618Z","end":"2025-12-17T01:30:14.265353Z","steps":["trace[60800954] 'agreement among raft nodes before linearized reading'  (duration: 107.570996ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265491Z","caller":"traceutil/trace.go:172","msg":"trace[531992615] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:3412; }","duration":"107.945895ms","start":"2025-12-17T01:30:14.157540Z","end":"2025-12-17T01:30:14.265486Z","steps":["trace[531992615] 'agreement among raft nodes before linearized reading'  (duration: 107.925047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265658Z","caller":"traceutil/trace.go:172","msg":"trace[1390021984] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:3412; }","duration":"117.155252ms","start":"2025-12-17T01:30:14.148497Z","end":"2025-12-17T01:30:14.265652Z","steps":["trace[1390021984] 'agreement among raft nodes before linearized reading'  (duration: 117.062208ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265769Z","caller":"traceutil/trace.go:172","msg":"trace[679095335] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:3412; }","duration":"117.281746ms","start":"2025-12-17T01:30:14.148481Z","end":"2025-12-17T01:30:14.265763Z","steps":["trace[679095335] 'agreement among raft nodes before linearized reading'  (duration: 117.263704ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265857Z","caller":"traceutil/trace.go:172","msg":"trace[372052167] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:3412; }","duration":"117.442719ms","start":"2025-12-17T01:30:14.148409Z","end":"2025-12-17T01:30:14.265852Z","steps":["trace[372052167] 'agreement among raft nodes before linearized reading'  (duration: 117.422576ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265911Z","caller":"traceutil/trace.go:172","msg":"trace[1549549526] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:3412; }","duration":"117.523661ms","start":"2025-12-17T01:30:14.148383Z","end":"2025-12-17T01:30:14.265907Z","steps":["trace[1549549526] 'agreement among raft nodes before linearized reading'  (duration: 117.510049ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266022Z","caller":"traceutil/trace.go:172","msg":"trace[665321617] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:4; response_revision:3412; }","duration":"117.650928ms","start":"2025-12-17T01:30:14.148366Z","end":"2025-12-17T01:30:14.266017Z","steps":["trace[665321617] 'agreement among raft nodes before linearized reading'  (duration: 117.604168ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266120Z","caller":"traceutil/trace.go:172","msg":"trace[1222872720] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:3412; }","duration":"117.770604ms","start":"2025-12-17T01:30:14.148345Z","end":"2025-12-17T01:30:14.266115Z","steps":["trace[1222872720] 'agreement among raft nodes before linearized reading'  (duration: 117.753686ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266201Z","caller":"traceutil/trace.go:172","msg":"trace[1508353187] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:3412; }","duration":"117.875611ms","start":"2025-12-17T01:30:14.148322Z","end":"2025-12-17T01:30:14.266197Z","steps":["trace[1508353187] 'agreement among raft nodes before linearized reading'  (duration: 117.858676ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266280Z","caller":"traceutil/trace.go:172","msg":"trace[2115891653] range","detail":"{range_begin:/registry/csistoragecapacities; range_end:; response_count:0; response_revision:3412; }","duration":"117.996568ms","start":"2025-12-17T01:30:14.148279Z","end":"2025-12-17T01:30:14.266275Z","steps":["trace[2115891653] 'agreement among raft nodes before linearized reading'  (duration: 117.980987ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266366Z","caller":"traceutil/trace.go:172","msg":"trace[468403184] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:3412; }","duration":"118.102411ms","start":"2025-12-17T01:30:14.148259Z","end":"2025-12-17T01:30:14.266361Z","steps":["trace[468403184] 'agreement among raft nodes before linearized reading'  (duration: 118.084738ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266493Z","caller":"traceutil/trace.go:172","msg":"trace[2046334447] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:7; response_revision:3412; }","duration":"118.248303ms","start":"2025-12-17T01:30:14.148241Z","end":"2025-12-17T01:30:14.266489Z","steps":["trace[2046334447] 'agreement among raft nodes before linearized reading'  (duration: 118.18643ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266598Z","caller":"traceutil/trace.go:172","msg":"trace[230986433] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:3412; }","duration":"118.372953ms","start":"2025-12-17T01:30:14.148220Z","end":"2025-12-17T01:30:14.266593Z","steps":["trace[230986433] 'agreement among raft nodes before linearized reading'  (duration: 118.353868ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266682Z","caller":"traceutil/trace.go:172","msg":"trace[1301493726] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:3412; }","duration":"118.481643ms","start":"2025-12-17T01:30:14.148196Z","end":"2025-12-17T01:30:14.266678Z","steps":["trace[1301493726] 'agreement among raft nodes before linearized reading'  (duration: 118.465537ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266784Z","caller":"traceutil/trace.go:172","msg":"trace[922218029] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:11; response_revision:3412; }","duration":"118.706442ms","start":"2025-12-17T01:30:14.148074Z","end":"2025-12-17T01:30:14.266780Z","steps":["trace[922218029] 'agreement among raft nodes before linearized reading'  (duration: 118.64086ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:36:18 up  7:18,  0 user,  load average: 0.79, 1.35, 1.52
	Linux ha-202151 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [421b902e0a04a8b9de33dba40eff9de2915e948b549831a023a55f14ab43a351] <==
	I1217 01:35:31.941448       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:35:41.944537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:35:41.944671       1 main.go:301] handling current node
	I1217 01:35:41.944712       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:35:41.944743       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:35:41.944930       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:35:41.944972       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:35:51.945291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:35:51.945466       1 main.go:301] handling current node
	I1217 01:35:51.945509       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:35:51.945541       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:35:51.945702       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:35:51.945745       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:01.941685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:36:01.941720       1 main.go:301] handling current node
	I1217 01:36:01.941737       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:36:01.941744       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:36:01.941941       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:36:01.941956       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:11.945404       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:36:11.945503       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:11.945683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:36:11.945723       1 main.go:301] handling current node
	I1217 01:36:11.945774       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:36:11.945806       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43] <==
	{"level":"warn","ts":"2025-12-17T01:30:14.097955Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a885a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e254a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098431Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c61680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098550Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a21e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002813860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098715Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021443c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100260Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002913860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100450Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002114960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001752b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002912d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.101157Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a9c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.108687Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 01:30:14.109232       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 01:30:14.109341       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111281       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111377       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.112738       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.651626ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-12-17T01:30:14.178037Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000eec000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I1217 01:30:20.949098       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1217 01:30:43.911399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1217 01:31:13.533495       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:32:03.642642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 01:32:03.692026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317] <==
	I1217 01:30:11.991091       1 serving.go:386] Generated self-signed cert in-memory
	I1217 01:30:13.217832       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1217 01:30:13.217864       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:13.219443       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 01:30:13.219569       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 01:30:13.220274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 01:30:13.220329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 01:30:24.189762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee] <==
	E1217 01:31:33.506016       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506049       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506057       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506063       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506069       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506199       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506373       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506437       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	I1217 01:31:53.524733       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.571989       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.572097       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.606958       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.607067       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646154       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646268       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695195       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695310       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742527       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742634       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785957       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785994       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:31:53.833471       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:32:03.448660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	
	
	==> kube-proxy [4f3ffacfcf52c27d4a48be1c9762e97d9c8b2f9eff204b9108c451da8b2defab] <==
	E1217 01:28:51.112803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:58.124554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:10.248785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:26.153294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:30:07.912871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 01:30:42.899769       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:30:42.899808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 01:30:42.899895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:30:42.921440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 01:30:42.921510       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:30:42.927648       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:30:42.928009       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:30:42.928034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:42.931509       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:30:42.931589       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:30:42.931909       1 config.go:200] "Starting service config controller"
	I1217 01:30:42.931953       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:30:42.932968       1 config.go:309] "Starting node config controller"
	I1217 01:30:42.932995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:30:42.933003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:30:42.933332       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:30:42.933352       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:30:43.031859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 01:30:43.032046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:30:43.033393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e] <==
	E1217 01:28:38.924937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:38.925147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:38.925091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:38.925212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:38.925293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:39.827962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:39.828496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 01:28:39.945026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:39.947443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 01:28:40.059965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 01:28:40.060779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:40.088703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:40.109776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 01:28:40.129468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:40.134968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 01:28:40.195130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 01:28:40.254624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 01:28:40.281191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 01:28:40.314175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 01:28:40.347761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 01:28:40.381360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 01:28:40.463231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 01:28:40.490812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 01:28:40.517370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 01:28:41.991837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 01:29:56 ha-202151 kubelet[802]: I1217 01:29:56.984304     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:29:56 ha-202151 kubelet[802]: E1217 01:29:56.984531     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:01 ha-202151 kubelet[802]: E1217 01:30:01.439578     802 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-202151)" interval="400ms"
	Dec 17 01:30:02 ha-202151 kubelet[802]: E1217 01:30:02.001281     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:10 ha-202151 kubelet[802]: I1217 01:30:10.983522     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:11 ha-202151 kubelet[802]: E1217 01:30:11.841503     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	Dec 17 01:30:12 ha-202151 kubelet[802]: E1217 01:30:12.002934     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.438401     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.439109     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:24 ha-202151 kubelet[802]: E1217 01:30:24.439355     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.449813     802 scope.go:117] "RemoveContainer" containerID="61c769055e2e33178655adbc6de856c58722cb4c70738c4d94a535d730bf75c6"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.450264     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:27 ha-202151 kubelet[802]: E1217 01:30:27.450420     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:29 ha-202151 kubelet[802]: I1217 01:30:29.966353     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:29 ha-202151 kubelet[802]: E1217 01:30:29.966538     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:34 ha-202151 kubelet[802]: I1217 01:30:34.175661     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:34 ha-202151 kubelet[802]: E1217 01:30:34.175845     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:38 ha-202151 kubelet[802]: I1217 01:30:38.984627     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:38 ha-202151 kubelet[802]: E1217 01:30:38.985748     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:47 ha-202151 kubelet[802]: I1217 01:30:47.984399     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:47 ha-202151 kubelet[802]: E1217 01:30:47.984633     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:52 ha-202151 kubelet[802]: I1217 01:30:52.985253     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:58 ha-202151 kubelet[802]: I1217 01:30:58.984851     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:58 ha-202151 kubelet[802]: E1217 01:30:58.985050     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:31:09 ha-202151 kubelet[802]: I1217 01:31:09.983912     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-202151 -n ha-202151
helpers_test.go:270: (dbg) Run:  kubectl --context ha-202151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (477.07s)
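As an aside on the post-mortem commands above: the harness flags any pod whose phase is not Running using a kubectl field selector. A rough client-go equivalent of that same check, offered only as a sketch (it assumes the active kubeconfig context already points at ha-202151 and is not part of helpers_test.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the active kubeconfig (assumed here to point at the ha-202151 context).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the kubectl command above: any pod not in phase Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

The FieldSelector string is the same status.phase!=Running expression passed to kubectl in the helpers_test.go command above.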

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (6.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-202151" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-202151\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-202151\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-202151\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-202151
helpers_test.go:244: (dbg) docker inspect ha-202151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	        "Created": "2025-12-17T01:12:34.697109094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1225803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:28:24.223784082Z",
	            "FinishedAt": "2025-12-17T01:28:23.510213695Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d-json.log",
	        "Name": "/ha-202151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-202151:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-202151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	                "LowerDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-202151",
	                "Source": "/var/lib/docker/volumes/ha-202151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-202151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-202151",
	                "name.minikube.sigs.k8s.io": "ha-202151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a8bfe290f37deb1c3104d9ab559bda078e71c5706919642a39ad4ea7fcab4f9",
	            "SandboxKey": "/var/run/docker/netns/1a8bfe290f37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-202151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:fe:96:8f:04:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e224ccab4890fdef242aee82a08ae93dfe44ddd1860f17db152892136a611dec",
	                    "EndpointID": "d9f94b3340492bc0b924fd0e2620aaaaec200a88061066241297f013a7336f77",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-202151",
	                        "0d1af93acb20"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-202151 -n ha-202151
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 logs -n 25: (3.056004715s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt                                                             │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node start m02 --alsologtostderr -v 5                                                                                      │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │                     │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │                     │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:25 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5                                                                                   │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:27 UTC │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │                     │
	│ node    │ ha-202151 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:27 UTC │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:28 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:28:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:28:23.957919 1225677 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.958241 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958276 1225677 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.958300 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958577 1225677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.958999 1225677 out.go:368] Setting JSON to false
	I1217 01:28:23.959883 1225677 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25854,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:28:23.959981 1225677 start.go:143] virtualization:  
	I1217 01:28:23.963109 1225677 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:28:23.966861 1225677 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:23.967008 1225677 notify.go:221] Checking for updates...
	I1217 01:28:23.972825 1225677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:23.975704 1225677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:23.978560 1225677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:28:23.981565 1225677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:28:23.984558 1225677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:23.987973 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.988577 1225677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:24.018679 1225677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:28:24.018817 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.078613 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.06901697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.078731 1225677 docker.go:319] overlay module found
	I1217 01:28:24.081724 1225677 out.go:179] * Using the docker driver based on existing profile
	I1217 01:28:24.084659 1225677 start.go:309] selected driver: docker
	I1217 01:28:24.084679 1225677 start.go:927] validating driver "docker" against &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.084825 1225677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:24.084933 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.139102 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.130176461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.139528 1225677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:28:24.139560 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:24.139616 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:24.139662 1225677 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.142829 1225677 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:28:24.145513 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:24.148343 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:24.151136 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:24.151182 1225677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:28:24.151172 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:24.151191 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:24.151281 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:24.151292 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:24.151447 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.170893 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:24.170917 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:24.170932 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:24.170962 1225677 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:24.171020 1225677 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-202151"
	I1217 01:28:24.171043 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:24.171052 1225677 fix.go:54] fixHost starting: 
	I1217 01:28:24.171312 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.188404 1225677 fix.go:112] recreateIfNeeded on ha-202151: state=Stopped err=<nil>
	W1217 01:28:24.188458 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:24.191811 1225677 out.go:252] * Restarting existing docker container for "ha-202151" ...
	I1217 01:28:24.191909 1225677 cli_runner.go:164] Run: docker start ha-202151
	I1217 01:28:24.438707 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.459881 1225677 kic.go:430] container "ha-202151" state is running.
	I1217 01:28:24.460741 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:24.487033 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.487599 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:24.487676 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:24.511372 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:24.513726 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:24.513748 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:24.516008 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:28:27.648958 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.648981 1225677 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:28:27.649043 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.671053 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.671376 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.671387 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:28:27.816001 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.816128 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.833557 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.833865 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.833885 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:27.968607 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:27.968638 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:27.968669 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:27.968686 1225677 provision.go:84] configureAuth start
	I1217 01:28:27.968751 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:27.986183 1225677 provision.go:143] copyHostCerts
	I1217 01:28:27.986244 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986288 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:27.986301 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986379 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:27.986471 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986493 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:27.986502 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986530 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:27.986576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986601 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:27.986609 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986637 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:27.986687 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
	I1217 01:28:28.161966 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:28.162074 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:28.162136 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.180162 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.276314 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:28.276374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:28.294399 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:28.294463 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:28:28.312546 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:28.312611 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:28.329872 1225677 provision.go:87] duration metric: took 361.168151ms to configureAuth
	I1217 01:28:28.329900 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:28.330141 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:28.330260 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.347687 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:28.348017 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:28.348037 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:28.719002 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:28.719025 1225677 machine.go:97] duration metric: took 4.231409969s to provisionDockerMachine
	I1217 01:28:28.719036 1225677 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:28:28.719047 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:28.719106 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:28.719158 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.741197 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.836254 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:28.839569 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:28.839599 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:28.839611 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:28.839667 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:28.839747 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:28.839758 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:28.839856 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:28.847310 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:28.864518 1225677 start.go:296] duration metric: took 145.466453ms for postStartSetup
	I1217 01:28:28.864667 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:28.864709 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.882572 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.974073 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:28.979262 1225677 fix.go:56] duration metric: took 4.808204011s for fixHost
	I1217 01:28:28.979289 1225677 start.go:83] releasing machines lock for "ha-202151", held for 4.808256014s
	I1217 01:28:28.979366 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:29.000545 1225677 ssh_runner.go:195] Run: cat /version.json
	I1217 01:28:29.000593 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:29.000605 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.000678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.017863 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.030045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.205586 1225677 ssh_runner.go:195] Run: systemctl --version
	I1217 01:28:29.212211 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:29.247878 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:29.252247 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:29.252372 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:29.260987 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:29.261012 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:29.261044 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:29.261091 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:29.276500 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:29.289977 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:29.290113 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:29.306150 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:29.319359 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:29.442260 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:29.554130 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:29.554229 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:29.569409 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:29.582225 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:29.693269 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:29.815821 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:29.829762 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:29.843587 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:29.843675 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.852929 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:29.853026 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.862094 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.870988 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.879860 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:29.888714 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.897427 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.906242 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.915392 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:29.923247 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:29.930867 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.085763 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:28:30.268466 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:28:30.268540 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:28:30.272645 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:28:30.272717 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:28:30.276359 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:28:30.302094 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:28:30.302194 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.329875 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.364988 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:28:30.367851 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:28:30.383155 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:28:30.387105 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.397488 1225677 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:28:30.397642 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:30.397701 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.434465 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.434490 1225677 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:28:30.434546 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.461597 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.461622 1225677 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:28:30.461631 1225677 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:28:30.461733 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:28:30.461815 1225677 ssh_runner.go:195] Run: crio config
	I1217 01:28:30.524993 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:30.525016 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:30.525041 1225677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:28:30.525063 1225677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:28:30.525197 1225677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:28:30.525219 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:28:30.525269 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:28:30.537247 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:30.537359 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:28:30.537423 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:28:30.545256 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:28:30.545330 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:28:30.553189 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:28:30.566160 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:28:30.579061 1225677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:28:30.591667 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:28:30.604079 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:28:30.607859 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.617660 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.737827 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:28:30.755642 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:28:30.755663 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:28:30.755694 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:30.755839 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:28:30.755906 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:28:30.755919 1225677 certs.go:257] generating profile certs ...
	I1217 01:28:30.755998 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:28:30.756031 1225677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698
	I1217 01:28:30.756050 1225677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:28:31.070955 1225677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 ...
	I1217 01:28:31.071062 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698: {Name:mke1b333e19e123d757f2361ffab64b3ce630ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071323 1225677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 ...
	I1217 01:28:31.071369 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698: {Name:mk12d8ef8dbb1ef8ff84c5ba8c83b430a9515230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071553 1225677 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:28:31.071777 1225677 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:28:31.071982 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:28:31.072020 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:28:31.072053 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:28:31.072099 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:28:31.072142 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:28:31.072179 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:28:31.072222 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:28:31.072260 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:28:31.072291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:28:31.072379 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:28:31.072496 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:28:31.072540 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:28:31.072623 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:28:31.072699 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:28:31.072755 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:28:31.072888 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:31.072995 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.073038 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.073074 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.073717 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:28:31.098054 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:28:31.121354 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:28:31.140746 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:28:31.159713 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:28:31.178284 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:28:31.196338 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:28:31.214382 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:28:31.231910 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:28:31.249283 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:28:31.267150 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:28:31.284464 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:28:31.297370 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:28:31.303511 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.310796 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:28:31.318435 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322279 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322380 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.363578 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:28:31.371139 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.378596 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:28:31.385983 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389802 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389911 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.449546 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:28:31.463605 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.474127 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:28:31.484475 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489596 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489713 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.551435 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
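	The block above (test -s, ln -fs, openssl x509 -hash, test -L) installs each CA under /usr/share/ca-certificates and checks that a <subject-hash>.0 symlink exists in /etc/ssl/certs, which is the naming scheme OpenSSL uses to look up trust anchors. A rough Go sketch of that hash-and-link step, shelling out to the same openssl invocation seen in the log (linkByHash and the example path are illustrative; writing /etc/ssl/certs needs root):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // linkByHash asks the openssl CLI for the certificate's subject hash
	    // (the same "openssl x509 -hash -noout -in ..." call seen in the log)
	    // and creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL expects.
	    func linkByHash(certPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        _ = os.Remove(link) // replace a stale link if one exists
	        return os.Symlink(certPath, link)
	    }

	    func main() {
	        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }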
	I1217 01:28:31.559450 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:28:31.573170 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:28:31.639157 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:28:31.715122 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:28:31.783477 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:28:31.844822 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:28:31.905215 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
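	The six openssl x509 -checkend 86400 runs above ask one question per certificate: will it still be valid 24 hours from now? The same check can be expressed with Go's standard library instead of the CLI; this is a sketch under that assumption (expiresWithin is an illustrative name, not a minikube function):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires in
	    // less than d — the same question "openssl x509 -checkend 86400" answers.
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            return
	        }
	        fmt.Println("expires within 24h:", soon)
	    }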
	I1217 01:28:31.967945 1225677 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:31.968163 1225677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:28:31.968241 1225677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:28:32.018626 1225677 cri.go:89] found id: "9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c"
	I1217 01:28:32.018691 1225677 cri.go:89] found id: "b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:28:32.018711 1225677 cri.go:89] found id: "d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e"
	I1217 01:28:32.018735 1225677 cri.go:89] found id: "f70584959dd02aedc5247d28de369b3dfbec762797364a5b46746119bcd380ba"
	I1217 01:28:32.018753 1225677 cri.go:89] found id: "82cc4882889dc4d930d89f36ac77114d0161f4172216bc47431b8697c0630be5"
	I1217 01:28:32.018781 1225677 cri.go:89] found id: ""
	I1217 01:28:32.018853 1225677 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 01:28:32.044061 1225677 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1217 01:28:32.044185 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:28:32.052950 1225677 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:28:32.053010 1225677 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:28:32.053080 1225677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:28:32.061188 1225677 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:32.061654 1225677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-202151" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.061797 1225677 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-202151" cluster setting kubeconfig missing "ha-202151" context setting]
	I1217 01:28:32.062106 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.062698 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:28:32.063465 1225677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:28:32.063546 1225677 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:28:32.063583 1225677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:28:32.063613 1225677 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:28:32.063651 1225677 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:28:32.063976 1225677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:28:32.063525 1225677 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:28:32.081817 1225677 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 01:28:32.081837 1225677 kubeadm.go:602] duration metric: took 28.80443ms to restartPrimaryControlPlane
	I1217 01:28:32.081846 1225677 kubeadm.go:403] duration metric: took 113.913079ms to StartCluster
	I1217 01:28:32.081861 1225677 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.081919 1225677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.082486 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.082669 1225677 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:28:32.082688 1225677 start.go:242] waiting for startup goroutines ...
	I1217 01:28:32.082706 1225677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:28:32.083152 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.086942 1225677 out.go:179] * Enabled addons: 
	I1217 01:28:32.089944 1225677 addons.go:530] duration metric: took 7.236595ms for enable addons: enabled=[]
	I1217 01:28:32.089983 1225677 start.go:247] waiting for cluster config update ...
	I1217 01:28:32.089992 1225677 start.go:256] writing updated cluster config ...
	I1217 01:28:32.093327 1225677 out.go:203] 
	I1217 01:28:32.096604 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.096790 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.100238 1225677 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:28:32.103257 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:32.106243 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:32.109227 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:32.109291 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:32.109420 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:32.109454 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:32.109592 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.109854 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:32.139073 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:32.139092 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:32.139106 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:32.139130 1225677 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:32.139181 1225677 start.go:364] duration metric: took 36.692µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:28:32.139199 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:32.139204 1225677 fix.go:54] fixHost starting: m02
	I1217 01:28:32.139463 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.170663 1225677 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:28:32.170689 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:32.173829 1225677 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:28:32.173910 1225677 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:28:32.543486 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.572710 1225677 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:28:32.573066 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:32.602951 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.603208 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:32.603266 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:32.629641 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:32.629950 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:32.629959 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:32.630596 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37710->127.0.0.1:33963: read: connection reset by peer
	I1217 01:28:35.808896 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:35.808924 1225677 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:28:35.808996 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:35.842137 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:35.842447 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:35.842466 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:28:36.038050 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:36.038178 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.082250 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:36.082569 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:36.082593 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:36.332805 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:36.332901 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:36.332944 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:36.332991 1225677 provision.go:84] configureAuth start
	I1217 01:28:36.333104 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:36.366101 1225677 provision.go:143] copyHostCerts
	I1217 01:28:36.366154 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366188 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:36.366198 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366291 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:36.366454 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366479 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:36.366484 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366514 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:36.366576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366600 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:36.366604 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366636 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:36.366685 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:28:36.714448 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:36.714609 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:36.714700 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.737234 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:36.864039 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:36.864124 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:36.913291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:36.913360 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:28:36.977060 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:36.977210 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:37.077043 1225677 provision.go:87] duration metric: took 744.017822ms to configureAuth
	I1217 01:28:37.077119 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:37.077458 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:37.077641 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:37.114203 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:37.114614 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:37.114630 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:38.749167 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:38.749190 1225677 machine.go:97] duration metric: took 6.145972988s to provisionDockerMachine
	I1217 01:28:38.749202 1225677 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:28:38.749218 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:38.749280 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:38.749320 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.798164 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:38.934750 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:38.938751 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:38.938784 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:38.938805 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:38.938890 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:38.939022 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:38.939035 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:38.939161 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:38.949374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:38.977662 1225677 start.go:296] duration metric: took 228.444359ms for postStartSetup
	I1217 01:28:38.977768 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:38.977833 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.997045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.094589 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
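	The two pipelines above (df -h ... print $5 and df -BG ... print $4) collect the usage percentage and the free gigabytes for /var inside the node container. A hedged Go equivalent using syscall.Statfs (Linux-only; the percentage here is computed more crudely than df's own rounding, and diskStats is an illustrative name):

	    package main

	    import (
	        "fmt"
	        "syscall"
	    )

	    // diskStats returns approximate used% and available GiB for the
	    // filesystem holding path — roughly what the two df|awk pipelines
	    // in the log report.
	    func diskStats(path string) (usedPct float64, availGiB float64, err error) {
	        var fs syscall.Statfs_t
	        if err = syscall.Statfs(path, &fs); err != nil {
	            return 0, 0, err
	        }
	        total := fs.Blocks * uint64(fs.Bsize)
	        avail := fs.Bavail * uint64(fs.Bsize)
	        used := total - fs.Bfree*uint64(fs.Bsize)
	        return 100 * float64(used) / float64(total), float64(avail) / (1 << 30), nil
	    }

	    func main() {
	        pct, gib, err := diskStats("/var")
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            return
	        }
	        fmt.Printf("%.0f%% used, %.0fG available\n", pct, gib)
	    }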
	I1217 01:28:39.100157 1225677 fix.go:56] duration metric: took 6.9609442s for fixHost
	I1217 01:28:39.100185 1225677 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.960996095s
	I1217 01:28:39.100277 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:39.121509 1225677 out.go:179] * Found network options:
	I1217 01:28:39.124537 1225677 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:28:39.127500 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:28:39.127546 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:28:39.127633 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:39.127678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.127731 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:39.127813 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.159911 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.160356 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.389362 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:39.518196 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:39.518280 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:39.530690 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:39.530730 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:39.530766 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:39.530828 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:39.559452 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:39.590703 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:39.590778 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:39.623053 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:39.646277 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:39.924657 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:40.211696 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:40.211818 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:40.234789 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:40.255311 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:40.483522 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:40.697787 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:40.728627 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:40.773025 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:40.773101 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.810962 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:40.811053 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.830095 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.843899 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.859512 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:40.875469 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.891423 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.906705 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.920139 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:40.935324 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
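	The run of sed -i commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, resets conmon_cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the daemon-reload and crio restart that follow. As a sketch of just the first of those edits, here is a Go version of the pause_image rewrite (setPauseImage is an illustrative name; the real tool simply runs sed over SSH as logged):

	    package main

	    import (
	        "fmt"
	        "os"
	        "regexp"
	    )

	    // setPauseImage rewrites the pause_image line in a CRI-O drop-in,
	    // the in-Go equivalent of the sed -i command shown in the log.
	    func setPauseImage(path, image string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	        out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	        return os.WriteFile(path, out, 0644)
	    }

	    func main() {
	        if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }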
	I1217 01:28:40.949872 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:41.265195 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:11.765812 1225677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.500580562s)
	I1217 01:30:11.765836 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:11.765895 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:11.773685 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:30:11.773748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:30:11.777914 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:30:11.832219 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:30:11.832561 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.883307 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.931713 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:30:11.934749 1225677 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:30:11.937773 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:30:11.958180 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:11.963975 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:11.980941 1225677 mustload.go:66] Loading cluster: ha-202151
	I1217 01:30:11.981196 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:11.981523 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:30:12.010212 1225677 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:30:12.010538 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:30:12.010547 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:30:12.010562 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:12.010679 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:30:12.010721 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:30:12.010729 1225677 certs.go:257] generating profile certs ...
	I1217 01:30:12.010806 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:30:12.010871 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:30:12.010909 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:30:12.010918 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:30:12.010930 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:30:12.010942 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:30:12.010952 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:30:12.010963 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:30:12.010976 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:30:12.010988 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:30:12.010998 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:30:12.011046 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:30:12.011099 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:12.011108 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:30:12.011142 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:30:12.011167 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:12.011226 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:30:12.011276 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:30:12.011308 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.011330 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.011341 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.011405 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:30:12.040530 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:30:12.140835 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:30:12.145679 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:30:12.155103 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:30:12.158946 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:30:12.168468 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:30:12.172730 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:30:12.182622 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:30:12.186892 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:30:12.196428 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:30:12.200769 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:30:12.210174 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:30:12.214229 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:30:12.223408 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:12.242760 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:12.263233 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:12.281118 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:12.299303 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:30:12.317115 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:12.334779 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:12.352592 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:30:12.370481 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:30:12.389095 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:30:12.412594 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:12.449315 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:30:12.473400 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:30:12.494693 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:30:12.517806 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:30:12.543454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:30:12.563454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:30:12.583785 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:30:12.603782 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:30:12.611317 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.622461 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:30:12.631322 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635830 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635962 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.683099 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:12.692252 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.701723 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:12.714594 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719579 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719716 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.763558 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:12.772848 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.782803 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:30:12.792174 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.797950 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.798068 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.843461 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:12.852350 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:12.856738 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:12.902677 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:12.948658 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:12.994789 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:13.042684 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:13.096054 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:30:13.158401 1225677 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:30:13.158570 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:13.158615 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:30:13.158706 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:30:13.173582 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:30:13.173705 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
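Because the lsmod probe above found no ip_vs kernel modules, the generated kube-vip manifest falls back to ARP-based failover on eth0 (vip_arp: "true") for the HA VIP 192.168.49.254 rather than IPVS load-balancing, and is later written out as a static pod manifest. A rough stdlib-only sketch of that module probe, reading /proc/modules instead of shelling out to lsmod; illustrative, not minikube's code:

// Illustrative equivalent of the `lsmod | grep ip_vs` probe seen above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ipvsLoaded() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}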
	I1217 01:30:13.173834 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:13.183901 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:13.184021 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:30:13.192889 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:30:13.208806 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:13.224983 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:30:13.240987 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:13.245030 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:13.255387 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.401843 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.417093 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:13.416720 1225677 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:13.423303 1225677 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:13.426149 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.647974 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.667990 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:30:13.668105 1225677 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:30:13.668438 1225677 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201323 1225677 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:30:14.201352 1225677 node_ready.go:38] duration metric: took 532.861298ms for node "ha-202151-m02" to be "Ready" ...
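With the node reporting Ready, the lines that follow show minikube polling pgrep for a local kube-apiserver process roughly every 500 ms until its wait deadline. A minimal sketch of such a poll loop, with assumed structure and timeout and not minikube's actual code:

// Illustrative poll loop matching the repeated pgrep probes below.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log runs over ssh_runner: pgrep for the apiserver.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(6 * time.Minute); err != nil {
		fmt.Println(err)
	}
}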
	I1217 01:30:14.201366 1225677 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:14.201430 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:14.702397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.202165 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.701679 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.202436 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.701593 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.202167 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.702134 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.201871 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.202178 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.702421 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.201608 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.701963 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.201849 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.702468 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.201659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.702284 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.202447 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.701767 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.201870 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.701725 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.202161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.701566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.201668 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.702034 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.202090 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.201787 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.701530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.202044 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.702049 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.202554 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.201868 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.702179 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.202396 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.702380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.701675 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.201765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.701936 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.201563 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.701569 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.202228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.702471 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.201812 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.701808 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.201588 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.701513 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.202142 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.701610 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.201867 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.702427 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.202172 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.202404 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.701704 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.201454 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.702205 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.201850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.702118 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.201665 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.702497 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.201634 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.701590 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.202217 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.202252 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.701540 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.702332 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.202380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.701545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.202215 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.701654 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.202277 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.701599 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.202236 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.702370 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.201552 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.702331 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.201545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.202549 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.202225 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.701571 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.202016 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.702392 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.212791 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.701639 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.202292 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.701781 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.201523 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.701618 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.201666 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.702192 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.202218 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.701749 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.201582 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.701583 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.201568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.702305 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.202030 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.702244 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.201601 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.702328 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.202314 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.701594 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.202413 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.701574 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.201566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.702440 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.701568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.202474 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.701537 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:13.701628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:13.737091 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:13.737114 1225677 cri.go:89] found id: ""
	I1217 01:31:13.737124 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:13.737180 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.741133 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:13.741205 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:13.767828 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:13.767849 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:13.767854 1225677 cri.go:89] found id: ""
	I1217 01:31:13.767861 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:13.767916 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.772125 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.775836 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:13.775913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:13.807345 1225677 cri.go:89] found id: ""
	I1217 01:31:13.807369 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.807377 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:13.807384 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:13.807444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:13.838797 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:13.838817 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:13.838821 1225677 cri.go:89] found id: ""
	I1217 01:31:13.838829 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:13.838887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.843081 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.846896 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:13.846969 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:13.886939 1225677 cri.go:89] found id: ""
	I1217 01:31:13.886968 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.886977 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:13.886983 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:13.887045 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:13.927324 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:13.927350 1225677 cri.go:89] found id: ""
	I1217 01:31:13.927359 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:13.927418 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.932191 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:13.932281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:13.963576 1225677 cri.go:89] found id: ""
	I1217 01:31:13.963605 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.963614 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:13.963623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:13.963636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:14.061267 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:14.061313 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:14.083208 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:14.083318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:14.113297 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:14.113328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:14.168503 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:14.168540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:14.225258 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:14.225299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:14.254658 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:14.254688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:14.329954 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:14.329994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:14.363830 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:14.363859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:14.780185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
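The repeated "connection refused" errors in the describe-nodes output above indicate that nothing is serving on localhost:8443 on this node yet, consistent with the empty pgrep results. A tiny illustrative probe for that condition (hypothetical helper, stdlib only, not part of minikube):

// Illustrative check: is anything listening on localhost:8443?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}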
	I1217 01:31:14.780213 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:14.780229 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:14.821746 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:14.821787 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.348276 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:17.359506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:17.359576 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:17.385494 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.385522 1225677 cri.go:89] found id: ""
	I1217 01:31:17.385531 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:17.385587 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.389291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:17.389381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:17.417467 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:17.417488 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:17.417493 1225677 cri.go:89] found id: ""
	I1217 01:31:17.417501 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:17.417557 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.421553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.425305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:17.425381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:17.452893 1225677 cri.go:89] found id: ""
	I1217 01:31:17.452925 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.452935 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:17.452945 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:17.453003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:17.479708 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.479730 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.479736 1225677 cri.go:89] found id: ""
	I1217 01:31:17.479743 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:17.479799 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.484009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.487543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:17.487617 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:17.522723 1225677 cri.go:89] found id: ""
	I1217 01:31:17.522751 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.522760 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:17.522767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:17.522829 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:17.550998 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.551023 1225677 cri.go:89] found id: ""
	I1217 01:31:17.551032 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:17.551086 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.554682 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:17.554767 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:17.587610 1225677 cri.go:89] found id: ""
	I1217 01:31:17.587650 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.587659 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:17.587684 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:17.587709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.616971 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:17.617002 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:17.692991 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:17.693034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:17.741052 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:17.741081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:17.761199 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:17.761228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.792936 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:17.793007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.845716 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:17.845753 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.881065 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:17.881096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:17.982043 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:17.982082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:18.070492 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:18.070517 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:18.070531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:18.117818 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:18.117911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.668542 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:20.679148 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:20.679242 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:20.706664 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:20.706687 1225677 cri.go:89] found id: ""
	I1217 01:31:20.706697 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:20.706757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.711072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:20.711147 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:20.737754 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:20.737779 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.737784 1225677 cri.go:89] found id: ""
	I1217 01:31:20.737792 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:20.737847 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.741755 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.745506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:20.745577 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:20.778364 1225677 cri.go:89] found id: ""
	I1217 01:31:20.778386 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.778394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:20.778400 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:20.778458 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:20.807237 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.807262 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:20.807267 1225677 cri.go:89] found id: ""
	I1217 01:31:20.807275 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:20.807361 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.811689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.815755 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:20.815857 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:20.842433 1225677 cri.go:89] found id: ""
	I1217 01:31:20.842454 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.842464 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:20.842470 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:20.842526 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:20.869792 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:20.869821 1225677 cri.go:89] found id: ""
	I1217 01:31:20.869831 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:20.869887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.873765 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:20.873847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:20.900911 1225677 cri.go:89] found id: ""
	I1217 01:31:20.900940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.900952 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:20.900961 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:20.900974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.954883 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:20.954920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:21.002822 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:21.002852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:21.108368 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:21.108406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:21.135557 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:21.135588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:21.176576 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:21.176610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:21.205927 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:21.205961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:21.232870 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:21.232897 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:21.312344 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:21.312377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:21.333806 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:21.333836 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:21.415860 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:21.415895 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:21.415909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:23.961577 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:23.974520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:23.974616 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:24.008513 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.008538 1225677 cri.go:89] found id: ""
	I1217 01:31:24.008548 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:24.008627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.013203 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:24.013311 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:24.041344 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.041369 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.041374 1225677 cri.go:89] found id: ""
	I1217 01:31:24.041383 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:24.041499 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.045778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.049690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:24.049764 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:24.076869 1225677 cri.go:89] found id: ""
	I1217 01:31:24.076902 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.076912 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:24.076919 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:24.076982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:24.115429 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.115504 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.115535 1225677 cri.go:89] found id: ""
	I1217 01:31:24.115571 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:24.115649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.121035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.126165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:24.126286 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:24.153228 1225677 cri.go:89] found id: ""
	I1217 01:31:24.153253 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.153262 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:24.153268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:24.153326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:24.196715 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:24.196801 1225677 cri.go:89] found id: ""
	I1217 01:31:24.196825 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:24.196912 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.201554 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:24.201642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:24.230189 1225677 cri.go:89] found id: ""
	I1217 01:31:24.230214 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.230223 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:24.230232 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:24.230244 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:24.308144 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:24.308188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:24.326634 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:24.326664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:24.400916 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:24.400938 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:24.400952 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.448701 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:24.448743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.482276 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:24.482309 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:24.515534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:24.515567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:24.625661 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:24.625708 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.652399 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:24.652439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.693518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:24.693556 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.750020 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:24.750059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.278748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:27.290609 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:27.290689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:27.316966 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.316991 1225677 cri.go:89] found id: ""
	I1217 01:31:27.316999 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:27.317054 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.320866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:27.320938 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:27.347398 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.347422 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.347427 1225677 cri.go:89] found id: ""
	I1217 01:31:27.347436 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:27.347496 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.351488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.355369 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:27.355442 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:27.381534 1225677 cri.go:89] found id: ""
	I1217 01:31:27.381564 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.381574 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:27.381580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:27.381662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:27.410739 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.410810 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.410822 1225677 cri.go:89] found id: ""
	I1217 01:31:27.410831 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:27.410892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.415095 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.419246 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:27.419364 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:27.447586 1225677 cri.go:89] found id: ""
	I1217 01:31:27.447612 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.447622 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:27.447629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:27.447693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:27.474916 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.474941 1225677 cri.go:89] found id: ""
	I1217 01:31:27.474950 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:27.475035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.479118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:27.479203 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:27.506051 1225677 cri.go:89] found id: ""
	I1217 01:31:27.506078 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.506087 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:27.506097 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:27.506108 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:27.545535 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:27.545568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:27.641749 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:27.641830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:27.661191 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:27.661226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:27.738097 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:27.738120 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:27.738134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.782011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:27.782048 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.834514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:27.834550 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.905140 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:27.905177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.940830 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:27.940862 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.969106 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:27.969136 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.998807 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:27.998835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
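
[Editor's note] Each cycle above is one log-gathering pass: minikube locates the control-plane containers with crictl, then tails their logs together with the kubelet, CRI-O, and dmesg output over SSH. The sketch below is a minimal manual reproduction of that pass inside the node, assuming the node is entered with "minikube ssh" and crictl is on PATH; the commands are copied from the log itself, and <id> is a placeholder for a container ID, which this excerpt lists explicitly.

    # Inside the minikube node (e.g. via "minikube ssh"); <id> is a placeholder container ID
    sudo crictl ps -a --quiet --name=kube-apiserver        # find the apiserver container
    sudo crictl logs --tail 400 <id>                       # tail that container's log
    sudo journalctl -u kubelet -n 400                      # kubelet unit log
    sudo journalctl -u crio -n 400                         # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig            # fails here while the apiserver is unreachable
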
	I1217 01:31:30.578811 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:30.590365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:30.590444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:30.618562 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:30.618585 1225677 cri.go:89] found id: ""
	I1217 01:31:30.618594 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:30.618677 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.623874 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:30.624003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:30.654712 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:30.654734 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.654740 1225677 cri.go:89] found id: ""
	I1217 01:31:30.654747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:30.654831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.658663 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.662256 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:30.662333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:30.690956 1225677 cri.go:89] found id: ""
	I1217 01:31:30.690983 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.691000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:30.691008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:30.691073 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:30.720079 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.720104 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.720110 1225677 cri.go:89] found id: ""
	I1217 01:31:30.720118 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:30.720190 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.724290 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.728443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:30.728569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:30.762597 1225677 cri.go:89] found id: ""
	I1217 01:31:30.762665 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.762683 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:30.762690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:30.762769 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:30.793999 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:30.794022 1225677 cri.go:89] found id: ""
	I1217 01:31:30.794031 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:30.794087 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.798031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:30.798111 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:30.825811 1225677 cri.go:89] found id: ""
	I1217 01:31:30.825838 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.825848 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:30.825858 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:30.825900 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.874308 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:30.874349 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.932548 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:30.932596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.973410 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:30.973440 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:31.061854 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:31.061893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:31.081279 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:31.081308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:31.173788 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:31.173816 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:31.173832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:31.203476 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:31.203507 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:31.242819 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:31.242857 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:31.270107 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:31.270137 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:31.301308 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:31.301338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:33.901065 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:33.913301 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:33.913455 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:33.945005 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:33.945033 1225677 cri.go:89] found id: ""
	I1217 01:31:33.945042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:33.945100 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.949030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:33.949099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:33.980996 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:33.981019 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:33.981024 1225677 cri.go:89] found id: ""
	I1217 01:31:33.981032 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:33.981090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.985533 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.989328 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:33.989424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:34.020066 1225677 cri.go:89] found id: ""
	I1217 01:31:34.020105 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.020115 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:34.020123 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:34.020214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:34.054526 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.054551 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.054558 1225677 cri.go:89] found id: ""
	I1217 01:31:34.054566 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:34.054628 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.058716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.062466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:34.062539 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:34.100752 1225677 cri.go:89] found id: ""
	I1217 01:31:34.100777 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.100787 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:34.100794 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:34.100856 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:34.133409 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.133431 1225677 cri.go:89] found id: ""
	I1217 01:31:34.133440 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:34.133498 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.137315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:34.137386 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:34.169015 1225677 cri.go:89] found id: ""
	I1217 01:31:34.169048 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.169058 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:34.169068 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:34.169081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:34.230112 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:34.230152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:34.275030 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:34.275071 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.303312 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:34.303341 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:34.323613 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:34.323791 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.377596 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:34.377632 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.405931 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:34.405961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:34.485309 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:34.485348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:34.537697 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:34.537780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:34.640362 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:34.640409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:34.719202 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:34.719227 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:34.719241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.248692 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:37.259883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:37.259952 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:37.288047 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.288071 1225677 cri.go:89] found id: ""
	I1217 01:31:37.288092 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:37.288147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.291723 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:37.291791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:37.320405 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.320468 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:37.320473 1225677 cri.go:89] found id: ""
	I1217 01:31:37.320481 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:37.320536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.324331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.327725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:37.327795 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:37.353914 1225677 cri.go:89] found id: ""
	I1217 01:31:37.353940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.353949 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:37.353956 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:37.354033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:37.380050 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.380082 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:37.380088 1225677 cri.go:89] found id: ""
	I1217 01:31:37.380097 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:37.380169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.384466 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.388616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:37.388737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:37.434167 1225677 cri.go:89] found id: ""
	I1217 01:31:37.434203 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.434213 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:37.434235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:37.434327 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:37.463397 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.463418 1225677 cri.go:89] found id: ""
	I1217 01:31:37.463426 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:37.463501 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.467357 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:37.467429 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:37.496476 1225677 cri.go:89] found id: ""
	I1217 01:31:37.496504 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.496514 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:37.496523 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:37.496534 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:37.580269 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:37.580312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:37.598989 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:37.599020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:37.669887 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:37.669956 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:37.669985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.696910 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:37.696934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.741514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:37.741546 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.797620 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:37.797657 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.827250 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:37.827277 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:37.860098 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:37.860127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:37.981956 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:37.982003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:38.045819 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:38.045855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.580761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:40.592635 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:40.592708 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:40.620832 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:40.620856 1225677 cri.go:89] found id: ""
	I1217 01:31:40.620866 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:40.620942 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.624827 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:40.624914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:40.662358 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.662381 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.662386 1225677 cri.go:89] found id: ""
	I1217 01:31:40.662394 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:40.662452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.666347 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.669969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:40.670068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:40.698897 1225677 cri.go:89] found id: ""
	I1217 01:31:40.698922 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.698931 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:40.698938 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:40.699026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:40.726184 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.726254 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.726265 1225677 cri.go:89] found id: ""
	I1217 01:31:40.726273 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:40.726331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.730221 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.734070 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:40.734150 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:40.760090 1225677 cri.go:89] found id: ""
	I1217 01:31:40.760116 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.760125 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:40.760185 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:40.760251 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:40.790670 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:40.790693 1225677 cri.go:89] found id: ""
	I1217 01:31:40.790702 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:40.790754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.794861 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:40.794936 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:40.826103 1225677 cri.go:89] found id: ""
	I1217 01:31:40.826129 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.826138 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:40.826147 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:40.826160 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.878987 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:40.879066 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.924714 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:40.924751 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.980944 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:40.980981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:41.072994 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:41.073031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:41.105014 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:41.105042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:41.212780 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:41.212818 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:41.241014 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:41.241042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:41.277652 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:41.277684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:41.308943 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:41.308972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:41.328092 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:41.328123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:41.410133 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
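
[Editor's note] Every "describe nodes" attempt in these cycles fails the same way: kubectl, pointed at the kubeconfig on the node, dials https://localhost:8443 and gets connection refused, so nothing is accepting connections on the apiserver's secure port even though a kube-apiserver container appears in the crictl listing. The following is an illustrative check only, not part of the test output; it assumes ss and curl are available in the node image.

    # Inside the minikube node: is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Probe the apiserver health endpoint directly (skip TLS verification)
    curl -sk https://localhost:8443/healthz || echo "apiserver not answering"
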
	I1217 01:31:43.911410 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:43.924272 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:43.924351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:43.953227 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:43.953252 1225677 cri.go:89] found id: ""
	I1217 01:31:43.953261 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:43.953337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.957558 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:43.957674 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:43.984394 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:43.984493 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:43.984513 1225677 cri.go:89] found id: ""
	I1217 01:31:43.984547 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:43.984626 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.988727 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.992395 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:43.992531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:44.023165 1225677 cri.go:89] found id: ""
	I1217 01:31:44.023242 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.023265 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:44.023285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:44.023376 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:44.056175 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.056249 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.056268 1225677 cri.go:89] found id: ""
	I1217 01:31:44.056293 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:44.056373 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.060006 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.063548 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:44.063623 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:44.091849 1225677 cri.go:89] found id: ""
	I1217 01:31:44.091875 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.091886 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:44.091892 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:44.091950 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:44.125771 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.125837 1225677 cri.go:89] found id: ""
	I1217 01:31:44.125861 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:44.125938 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.129707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:44.129781 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:44.157267 1225677 cri.go:89] found id: ""
	I1217 01:31:44.157343 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.157359 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:44.157369 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:44.157380 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:44.179921 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:44.180042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:44.227426 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:44.227495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:44.268056 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:44.268089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:44.312908 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:44.312943 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.344639 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:44.344673 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.370623 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:44.370650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:44.400984 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:44.401017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:44.494253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:44.494291 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:44.563778 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:44.563859 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:44.563887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.630776 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:44.630812 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.217775 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:47.228858 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:47.228999 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:47.258264 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.258287 1225677 cri.go:89] found id: ""
	I1217 01:31:47.258305 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:47.258366 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.262265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:47.262366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:47.293485 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.293508 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.293552 1225677 cri.go:89] found id: ""
	I1217 01:31:47.293562 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:47.293623 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.297395 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.300792 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:47.300866 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:47.329792 1225677 cri.go:89] found id: ""
	I1217 01:31:47.329818 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.329827 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:47.329833 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:47.329890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:47.356681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.356747 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:47.356758 1225677 cri.go:89] found id: ""
	I1217 01:31:47.356767 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:47.356839 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.360948 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.364494 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:47.364598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:47.390993 1225677 cri.go:89] found id: ""
	I1217 01:31:47.391021 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.391031 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:47.391037 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:47.391099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:47.417453 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.417517 1225677 cri.go:89] found id: ""
	I1217 01:31:47.417541 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:47.417618 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.421365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:47.421437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:47.447227 1225677 cri.go:89] found id: ""
	I1217 01:31:47.447254 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.447264 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:47.447273 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:47.447285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.474445 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:47.474475 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:47.546929 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:47.546947 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:47.546962 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.621943 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:47.621985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:47.653654 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:47.653679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:47.751509 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:47.751548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:47.773290 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:47.773323 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.802347 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:47.802378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.849646 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:47.849680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.894275 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:47.894315 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.949242 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:47.949281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.480769 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:50.491711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:50.491827 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:50.519320 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.519345 1225677 cri.go:89] found id: ""
	I1217 01:31:50.519353 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:50.519440 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.523424 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:50.523533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:50.551627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:50.551652 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:50.551658 1225677 cri.go:89] found id: ""
	I1217 01:31:50.551665 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:50.551751 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.555585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.559244 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:50.559347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:50.586218 1225677 cri.go:89] found id: ""
	I1217 01:31:50.586241 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.586249 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:50.586255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:50.586333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:50.618629 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.618661 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.618667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.618675 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:50.618776 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.622850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.626687 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:50.626824 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:50.659667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.659703 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.659713 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:50.659738 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:50.659817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:50.686997 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.687069 1225677 cri.go:89] found id: ""
	I1217 01:31:50.687092 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:50.687160 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.690709 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:50.690823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:50.721432 1225677 cri.go:89] found id: ""
	I1217 01:31:50.721509 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.721534 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:50.721553 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:50.721583 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.748223 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:50.748250 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.807290 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:50.807328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.835575 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:50.835603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.861513 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:50.861539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:50.937079 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:50.937118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:51.023701 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:51.023722 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:51.023736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:51.063322 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:51.063360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:51.134936 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:51.134983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:51.172581 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:51.172611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:51.279920 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:51.279958 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:53.800293 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:53.813493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:53.813572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:53.855699 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:53.855727 1225677 cri.go:89] found id: ""
	I1217 01:31:53.855737 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:53.855790 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.860842 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:53.860915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:53.905688 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:53.905715 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:53.905720 1225677 cri.go:89] found id: ""
	I1217 01:31:53.905727 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:53.905796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.911027 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.916033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:53.916105 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:53.971312 1225677 cri.go:89] found id: ""
	I1217 01:31:53.971339 1225677 logs.go:282] 0 containers: []
	W1217 01:31:53.971349 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:53.971356 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:53.971477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:54.021427 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.021456 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:54.021474 1225677 cri.go:89] found id: ""
	I1217 01:31:54.021488 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:54.021585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.030798 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.035177 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:54.035371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:54.113099 1225677 cri.go:89] found id: ""
	I1217 01:31:54.113124 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.113133 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:54.113139 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:54.113246 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:54.166627 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.166651 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.166658 1225677 cri.go:89] found id: ""
	I1217 01:31:54.166665 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:54.166783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.171754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.182182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:54.182283 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:54.234503 1225677 cri.go:89] found id: ""
	I1217 01:31:54.234567 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.234591 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:54.234615 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:54.234642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.275461 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:54.275532 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:54.366758 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:54.366801 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:54.403474 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:54.403513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:54.422090 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:54.422131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:54.486461 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:54.486497 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.553429 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:54.553466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.599563 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:54.599593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:54.706755 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:54.706795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:54.812798 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:54.812822 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:54.812835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:54.838401 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:54.838433 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:54.893784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:54.893823 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.427168 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:57.438551 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:57.438655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:57.468636 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:57.468660 1225677 cri.go:89] found id: ""
	I1217 01:31:57.468669 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:57.468726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.472745 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:57.472819 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:57.500682 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:57.500702 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.500707 1225677 cri.go:89] found id: ""
	I1217 01:31:57.500714 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:57.500777 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.504719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.508458 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:57.508557 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:57.540789 1225677 cri.go:89] found id: ""
	I1217 01:31:57.540813 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.540822 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:57.540828 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:57.540889 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:57.570366 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.570392 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.570398 1225677 cri.go:89] found id: ""
	I1217 01:31:57.570406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:57.570462 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.574531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.578702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:57.578782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:57.608017 1225677 cri.go:89] found id: ""
	I1217 01:31:57.608042 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.608051 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:57.608058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:57.608122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:57.634195 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:57.634218 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.634224 1225677 cri.go:89] found id: ""
	I1217 01:31:57.634232 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:57.634317 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.638339 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.642068 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:57.642166 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:57.669214 1225677 cri.go:89] found id: ""
	I1217 01:31:57.669250 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.669259 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:57.669268 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:57.669284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.733958 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:57.733991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.790688 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:57.790731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.825378 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:57.825409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:57.903425 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:57.903465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:57.977243 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:57.977266 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:57.977280 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:58.008228 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:58.008262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:58.044832 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:58.044861 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:58.076961 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:58.077009 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:58.174022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:58.174061 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:58.194526 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:58.194561 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:58.225629 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:58.225658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.768659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:00.779781 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:00.779855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:00.809961 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:00.809984 1225677 cri.go:89] found id: ""
	I1217 01:32:00.809993 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:00.810055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.814113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:00.814232 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:00.842110 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.842179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:00.842193 1225677 cri.go:89] found id: ""
	I1217 01:32:00.842202 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:00.842259 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.846284 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.850463 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:00.850535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:00.877321 1225677 cri.go:89] found id: ""
	I1217 01:32:00.877347 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.877357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:00.877364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:00.877424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:00.903950 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:00.904025 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:00.904044 1225677 cri.go:89] found id: ""
	I1217 01:32:00.904065 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:00.904183 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.907995 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.911685 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:00.911762 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:00.940826 1225677 cri.go:89] found id: ""
	I1217 01:32:00.940856 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.940865 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:00.940871 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:00.940931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:00.967056 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:00.967077 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:00.967088 1225677 cri.go:89] found id: ""
	I1217 01:32:00.967097 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:00.967175 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.970953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.975717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:00.975791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:01.010237 1225677 cri.go:89] found id: ""
	I1217 01:32:01.010262 1225677 logs.go:282] 0 containers: []
	W1217 01:32:01.010272 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:01.010281 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:01.010294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:01.030320 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:01.030353 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:01.055381 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:01.055409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:01.097515 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:01.097548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:01.166756 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:01.166797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:01.208792 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:01.208824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:01.246024 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:01.246056 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:01.340436 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:01.340519 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:01.412662 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:01.412684 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:01.412699 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:01.467190 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:01.467228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:01.500459 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:01.500486 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:01.531449 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:01.531477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:04.134627 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:04.145902 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:04.145978 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:04.185746 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.185766 1225677 cri.go:89] found id: ""
	I1217 01:32:04.185774 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:04.185831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.189797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:04.189867 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:04.228673 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.228694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.228698 1225677 cri.go:89] found id: ""
	I1217 01:32:04.228706 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:04.228759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.233260 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.238075 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:04.238212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:04.268955 1225677 cri.go:89] found id: ""
	I1217 01:32:04.268983 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.268992 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:04.268999 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:04.269102 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:04.299973 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.300041 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.300061 1225677 cri.go:89] found id: ""
	I1217 01:32:04.300088 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:04.300185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.303813 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.307456 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:04.307533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:04.334293 1225677 cri.go:89] found id: ""
	I1217 01:32:04.334319 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.334331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:04.334338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:04.334398 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:04.360886 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.360906 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.360910 1225677 cri.go:89] found id: ""
	I1217 01:32:04.360918 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:04.360974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.365024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.368933 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:04.369005 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:04.397116 1225677 cri.go:89] found id: ""
	I1217 01:32:04.397140 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.397149 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:04.397159 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:04.397174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:04.490637 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:04.490721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.531861 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:04.531938 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.577801 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:04.577838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.635487 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:04.635524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.667260 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:04.667290 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:04.718117 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:04.718146 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:04.737680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:04.737711 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:04.825872 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:04.825894 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:04.825908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.858804 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:04.858833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.887920 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:04.887953 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.916371 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:04.916476 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
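Note on the failure pattern above: each pass of this loop probes the control plane with "kubectl describe nodes" against the in-cluster kubeconfig, and every probe fails with "connection refused" on localhost:8443, so only the crictl- and journalctl-based collectors return output while minikube keeps waiting for a healthy kube-apiserver. A minimal way to rerun the same probes by hand from inside the node, assuming shell access (a sketch built from the commands recorded in this log, not an official troubleshooting procedure; the container id placeholder is hypothetical and must be replaced with the id printed by crictl):

	# is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list apiserver containers known to CRI-O, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the logs of the container id printed above (replace <id>)
	sudo crictl logs --tail 400 <id>
	# the probe that keeps failing while nothing is serving on localhost:8443
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig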
	I1217 01:32:07.492728 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:07.504442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:07.504532 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:07.538372 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.538403 1225677 cri.go:89] found id: ""
	I1217 01:32:07.538442 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:07.538517 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.542523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:07.542597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:07.576339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:07.576360 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:07.576364 1225677 cri.go:89] found id: ""
	I1217 01:32:07.576372 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:07.576459 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.580149 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.584111 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:07.584196 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:07.610578 1225677 cri.go:89] found id: ""
	I1217 01:32:07.610605 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.610614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:07.610621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:07.610678 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:07.637129 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:07.637151 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:07.637157 1225677 cri.go:89] found id: ""
	I1217 01:32:07.637164 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:07.637217 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.641090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.644872 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:07.644992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:07.679300 1225677 cri.go:89] found id: ""
	I1217 01:32:07.679322 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.679331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:07.679350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:07.679419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:07.719129 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:07.719155 1225677 cri.go:89] found id: ""
	I1217 01:32:07.719164 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:07.719231 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.723681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:07.723755 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:07.756924 1225677 cri.go:89] found id: ""
	I1217 01:32:07.756950 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.756969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:07.756979 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:07.756991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:07.856049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:07.856088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:07.935429 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:07.935456 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:07.935469 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.961013 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:07.961042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:08.005989 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:08.006024 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:08.039061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:08.039092 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:08.058159 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:08.058194 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:08.112456 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:08.112490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:08.176389 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:08.176457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:08.215782 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:08.215809 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:08.244713 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:08.244743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
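Each pass of the loop follows the same collection pattern: for every control-plane component it lists matching container ids with crictl, tails each container's log, and skips components for which no container was found (coredns, kube-proxy and kindnet in this run), while kubelet and CRI-O logs are read from journald. A minimal sketch of that collection loop, assuming shell access to the node and crictl on PATH (illustrative only, not minikube's own implementation):

	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name $id ==="
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400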
	I1217 01:32:10.828143 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:10.838717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:10.838793 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:10.869672 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:10.869696 1225677 cri.go:89] found id: ""
	I1217 01:32:10.869705 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:10.869761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.873603 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:10.873720 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:10.900811 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:10.900837 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:10.900843 1225677 cri.go:89] found id: ""
	I1217 01:32:10.900851 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:10.900906 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.904643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.908193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:10.908261 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:10.935598 1225677 cri.go:89] found id: ""
	I1217 01:32:10.935624 1225677 logs.go:282] 0 containers: []
	W1217 01:32:10.935634 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:10.935641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:10.935698 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:10.966869 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:10.966894 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:10.966899 1225677 cri.go:89] found id: ""
	I1217 01:32:10.966907 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:10.966962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.970920 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.974605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:10.974715 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:11.012577 1225677 cri.go:89] found id: ""
	I1217 01:32:11.012602 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.012612 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:11.012618 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:11.012680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:11.048075 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.048100 1225677 cri.go:89] found id: ""
	I1217 01:32:11.048130 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:11.048185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:11.052014 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:11.052089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:11.084486 1225677 cri.go:89] found id: ""
	I1217 01:32:11.084511 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.084524 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:11.084533 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:11.084545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:11.192042 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:11.192076 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:11.218345 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:11.218378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:11.261837 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:11.261869 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:11.321100 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:11.321138 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:11.356360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:11.356390 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:11.433012 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:11.433054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:11.511248 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:11.511270 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:11.511287 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:11.549584 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:11.549614 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:11.596753 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:11.596786 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.626208 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:11.626240 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.173611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:14.187629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:14.187704 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:14.223146 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.223170 1225677 cri.go:89] found id: ""
	I1217 01:32:14.223179 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:14.223264 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.227607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:14.227721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:14.255753 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:14.255791 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.255796 1225677 cri.go:89] found id: ""
	I1217 01:32:14.255804 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:14.255881 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.259963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.263644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:14.263717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:14.290575 1225677 cri.go:89] found id: ""
	I1217 01:32:14.290599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.290614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:14.290621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:14.290681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:14.318287 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.318309 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.318314 1225677 cri.go:89] found id: ""
	I1217 01:32:14.318323 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:14.318378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.322352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.326073 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:14.326157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:14.352179 1225677 cri.go:89] found id: ""
	I1217 01:32:14.352205 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.352214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:14.352221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:14.352304 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:14.380539 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.380565 1225677 cri.go:89] found id: ""
	I1217 01:32:14.380582 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:14.380678 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.385134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:14.385210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:14.417374 1225677 cri.go:89] found id: ""
	I1217 01:32:14.417407 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.417417 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:14.417441 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:14.417457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.464173 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:14.464209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.491958 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:14.492035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.547112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:14.547180 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:14.617502 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:14.617525 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:14.617548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.645669 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:14.645697 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.705027 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:14.705070 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.738615 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:14.738689 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:14.819881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:14.819961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:14.917702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:14.917739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:14.940092 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:14.940127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.482077 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:17.493126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:17.493227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:17.520116 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.520137 1225677 cri.go:89] found id: ""
	I1217 01:32:17.520155 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:17.520234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.524492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:17.524572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:17.553355 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.553419 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:17.553439 1225677 cri.go:89] found id: ""
	I1217 01:32:17.553454 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:17.553512 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.557145 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.560580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:17.560663 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:17.586798 1225677 cri.go:89] found id: ""
	I1217 01:32:17.586824 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.586843 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:17.586850 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:17.586915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:17.614063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.614096 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:17.614102 1225677 cri.go:89] found id: ""
	I1217 01:32:17.614110 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:17.614174 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.618083 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.621593 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:17.621662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:17.652917 1225677 cri.go:89] found id: ""
	I1217 01:32:17.652943 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.652964 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:17.652972 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:17.653029 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:17.679412 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.679435 1225677 cri.go:89] found id: ""
	I1217 01:32:17.679443 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:17.679508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.683530 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:17.683606 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:17.714591 1225677 cri.go:89] found id: ""
	I1217 01:32:17.714618 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.714628 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:17.714638 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:17.714652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.774158 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:17.774193 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.802731 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:17.802759 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:17.837385 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:17.837413 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:17.948723 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:17.948766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:17.967594 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:17.967622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.997257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:17.997350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:18.046163 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:18.046204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:18.075264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:18.075345 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:18.179955 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:18.180007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:18.261983 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:18.262017 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:18.262034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.814850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:20.826637 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:20.826710 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:20.867818 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:20.867839 1225677 cri.go:89] found id: ""
	I1217 01:32:20.867847 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:20.867902 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.871814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:20.871895 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:20.902722 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.902742 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:20.902746 1225677 cri.go:89] found id: ""
	I1217 01:32:20.902755 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:20.902808 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.907236 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.911156 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:20.911230 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:20.937933 1225677 cri.go:89] found id: ""
	I1217 01:32:20.937959 1225677 logs.go:282] 0 containers: []
	W1217 01:32:20.937968 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:20.937974 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:20.938063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:20.965558 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:20.965581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:20.965587 1225677 cri.go:89] found id: ""
	I1217 01:32:20.965595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:20.965652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.969565 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.973428 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:20.973498 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:21.012487 1225677 cri.go:89] found id: ""
	I1217 01:32:21.012512 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.012521 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:21.012527 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:21.012590 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:21.041411 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.041443 1225677 cri.go:89] found id: ""
	I1217 01:32:21.041455 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:21.041515 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:21.045571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:21.045672 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:21.074982 1225677 cri.go:89] found id: ""
	I1217 01:32:21.075005 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.075014 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:21.075023 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:21.075036 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:21.105151 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:21.105181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.131324 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:21.131398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:21.228426 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:21.228461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:21.285988 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:21.286020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:21.369964 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:21.370005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:21.406263 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:21.406295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:21.425680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:21.425710 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:21.503044 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:21.503067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:21.503083 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:21.533119 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:21.533147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:21.584619 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:21.584652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.145239 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:24.156031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:24.156112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:24.191491 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.191515 1225677 cri.go:89] found id: ""
	I1217 01:32:24.191523 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:24.191579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.196271 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:24.196344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:24.229412 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.229433 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.229437 1225677 cri.go:89] found id: ""
	I1217 01:32:24.229445 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:24.229502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.233353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.237055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:24.237137 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:24.264226 1225677 cri.go:89] found id: ""
	I1217 01:32:24.264252 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.264262 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:24.264268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:24.264330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:24.300946 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.300972 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.300977 1225677 cri.go:89] found id: ""
	I1217 01:32:24.300984 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:24.301038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.304900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.308160 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:24.308277 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:24.334573 1225677 cri.go:89] found id: ""
	I1217 01:32:24.334596 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.334606 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:24.334612 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:24.334670 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:24.367769 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.367791 1225677 cri.go:89] found id: ""
	I1217 01:32:24.367800 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:24.367853 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.371482 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:24.371586 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:24.398071 1225677 cri.go:89] found id: ""
	I1217 01:32:24.398095 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.398104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:24.398112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:24.398124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:24.466998 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:24.467073 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:24.467093 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.494797 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:24.494826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.566818 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:24.566859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.627760 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:24.627797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.657250 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:24.657278 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.683514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:24.683549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:24.703093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:24.703129 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.757376 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:24.757411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:24.839791 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:24.839826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:24.883947 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:24.883978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:27.492559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:27.503372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:27.503445 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:27.541590 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:27.541611 1225677 cri.go:89] found id: ""
	I1217 01:32:27.541620 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:27.541675 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.545373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:27.545448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:27.571462 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:27.571486 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:27.571491 1225677 cri.go:89] found id: ""
	I1217 01:32:27.571499 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:27.571555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.575671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.579240 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:27.579332 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:27.612215 1225677 cri.go:89] found id: ""
	I1217 01:32:27.612245 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.612254 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:27.612261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:27.612339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:27.639672 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:27.639696 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.639701 1225677 cri.go:89] found id: ""
	I1217 01:32:27.639708 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:27.639782 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.643953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.647820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:27.647942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:27.673115 1225677 cri.go:89] found id: ""
	I1217 01:32:27.673141 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.673150 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:27.673157 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:27.673215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:27.703404 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.703428 1225677 cri.go:89] found id: ""
	I1217 01:32:27.703437 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:27.703566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.708031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:27.708106 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:27.736748 1225677 cri.go:89] found id: ""
	I1217 01:32:27.736770 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.736779 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:27.736789 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:27.736802 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.763699 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:27.763727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.790990 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:27.791020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:27.871644 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:27.871680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:27.904392 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:27.904499 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:27.926297 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:27.926333 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:28.002149 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:28.002177 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:28.002196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:28.030901 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:28.030933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:28.070431 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:28.070463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:28.124957 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:28.124994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:28.185427 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:28.185465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:30.787761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:30.798953 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:30.799025 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:30.826532 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:30.826561 1225677 cri.go:89] found id: ""
	I1217 01:32:30.826570 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:30.826631 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.830429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:30.830503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:30.856397 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:30.856449 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:30.856462 1225677 cri.go:89] found id: ""
	I1217 01:32:30.856470 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:30.856524 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.860460 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.864121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:30.864204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:30.893119 1225677 cri.go:89] found id: ""
	I1217 01:32:30.893143 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.893153 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:30.893166 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:30.893225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:30.942371 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:30.942393 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:30.942398 1225677 cri.go:89] found id: ""
	I1217 01:32:30.942406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:30.942463 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.947748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.953053 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:30.953140 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:30.991763 1225677 cri.go:89] found id: ""
	I1217 01:32:30.991793 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.991802 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:30.991817 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:30.991888 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:31.026936 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.026958 1225677 cri.go:89] found id: ""
	I1217 01:32:31.026967 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:31.027022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:31.031253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:31.031338 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:31.060606 1225677 cri.go:89] found id: ""
	I1217 01:32:31.060632 1225677 logs.go:282] 0 containers: []
	W1217 01:32:31.060641 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:31.060650 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:31.060666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.089805 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:31.089837 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:31.179774 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:31.179814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:31.231705 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:31.231739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:31.264982 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:31.265014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:31.295319 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:31.295348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:31.398598 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:31.398635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:31.418439 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:31.418473 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:31.505328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:31.505348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:31.505364 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:31.534574 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:31.534604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:31.584571 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:31.584607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.145660 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:34.156555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:34.156680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:34.189334 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.189353 1225677 cri.go:89] found id: ""
	I1217 01:32:34.189361 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:34.189415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.193025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:34.193117 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:34.229137 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.229160 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.229165 1225677 cri.go:89] found id: ""
	I1217 01:32:34.229176 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:34.229234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.232921 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.236260 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:34.236361 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:34.264990 1225677 cri.go:89] found id: ""
	I1217 01:32:34.265013 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.265022 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:34.265028 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:34.265086 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:34.292130 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.292205 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.292225 1225677 cri.go:89] found id: ""
	I1217 01:32:34.292250 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:34.292344 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.295987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.299388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:34.299500 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:34.325943 1225677 cri.go:89] found id: ""
	I1217 01:32:34.326026 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.326042 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:34.326049 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:34.326108 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:34.363328 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.363351 1225677 cri.go:89] found id: ""
	I1217 01:32:34.363361 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:34.363415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.367803 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:34.367878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:34.394984 1225677 cri.go:89] found id: ""
	I1217 01:32:34.395011 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.395020 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:34.395029 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:34.395065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:34.470015 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:34.470036 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:34.470049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.496057 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:34.496091 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.549522 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:34.549555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.592693 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:34.592728 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.652425 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:34.652505 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.680716 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:34.680747 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.707492 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:34.707522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:34.787410 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:34.787492 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:34.892246 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:34.892284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:34.910499 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:34.910530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:37.463203 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:37.474127 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:37.474200 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:37.506946 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.507018 1225677 cri.go:89] found id: ""
	I1217 01:32:37.507042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:37.507123 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.511460 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:37.511535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:37.546992 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:37.547014 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:37.547020 1225677 cri.go:89] found id: ""
	I1217 01:32:37.547028 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:37.547090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.550864 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.554364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:37.554450 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:37.592224 1225677 cri.go:89] found id: ""
	I1217 01:32:37.592353 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.592394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:37.592437 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:37.592579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:37.620557 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.620581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:37.620587 1225677 cri.go:89] found id: ""
	I1217 01:32:37.620595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:37.620691 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.624719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.628465 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:37.628541 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:37.657843 1225677 cri.go:89] found id: ""
	I1217 01:32:37.657870 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.657878 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:37.657885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:37.657955 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:37.686792 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.686825 1225677 cri.go:89] found id: ""
	I1217 01:32:37.686834 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:37.686898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.690651 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:37.690783 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:37.719977 1225677 cri.go:89] found id: ""
	I1217 01:32:37.720000 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.720009 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:37.720018 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:37.720030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:37.738580 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:37.738610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:37.814847 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:37.814869 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:37.814883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.840694 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:37.840723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.901817 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:37.901855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.935757 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:37.935839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:38.014642 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:38.014679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:38.115079 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:38.115123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:38.157390 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:38.157423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:38.204086 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:38.204123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:38.235323 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:38.235355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:40.766175 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:40.777746 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:40.777818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:40.809026 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:40.809051 1225677 cri.go:89] found id: ""
	I1217 01:32:40.809060 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:40.809157 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.813212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:40.813294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:40.840793 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:40.840821 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:40.840826 1225677 cri.go:89] found id: ""
	I1217 01:32:40.840834 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:40.840915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.845018 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.848655 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:40.848732 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:40.875726 1225677 cri.go:89] found id: ""
	I1217 01:32:40.875750 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.875761 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:40.875767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:40.875825 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:40.902504 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:40.902527 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:40.902532 1225677 cri.go:89] found id: ""
	I1217 01:32:40.902540 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:40.902593 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.906394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.910259 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:40.910330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:40.936570 1225677 cri.go:89] found id: ""
	I1217 01:32:40.936599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.936609 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:40.936616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:40.936676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:40.964358 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:40.964381 1225677 cri.go:89] found id: ""
	I1217 01:32:40.964389 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:40.964541 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.968221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:40.968292 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:40.998606 1225677 cri.go:89] found id: ""
	I1217 01:32:40.998633 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.998644 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:40.998654 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:40.998668 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:41.022520 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:41.022551 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:41.051598 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:41.051625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:41.091115 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:41.091148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:41.159179 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:41.159223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:41.190970 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:41.190997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:41.225786 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:41.225815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:41.294484 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:41.294509 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:41.294523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:41.346979 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:41.347017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:41.374095 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:41.374126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:41.456622 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:41.456658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.066375 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:44.077293 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:44.077365 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:44.104332 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.104476 1225677 cri.go:89] found id: ""
	I1217 01:32:44.104504 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:44.104580 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.108715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:44.108799 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:44.140649 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.140672 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.140677 1225677 cri.go:89] found id: ""
	I1217 01:32:44.140684 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:44.140763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.144834 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.148730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:44.148811 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:44.197233 1225677 cri.go:89] found id: ""
	I1217 01:32:44.197259 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.197268 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:44.197274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:44.197350 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:44.240339 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:44.240363 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.240368 1225677 cri.go:89] found id: ""
	I1217 01:32:44.240376 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:44.240456 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.244962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.248793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:44.248913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:44.278464 1225677 cri.go:89] found id: ""
	I1217 01:32:44.278491 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.278501 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:44.278507 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:44.278585 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:44.308914 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.308938 1225677 cri.go:89] found id: ""
	I1217 01:32:44.308958 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:44.309048 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.313878 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:44.313951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:44.344530 1225677 cri.go:89] found id: ""
	I1217 01:32:44.344555 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.344577 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:44.344588 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:44.344600 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.372833 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:44.372864 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:44.452952 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:44.452990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:44.474609 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:44.474642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:44.552482 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:44.552507 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:44.552521 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.580322 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:44.580352 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.610292 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:44.610320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:44.643236 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:44.643266 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.755542 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:44.755601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.808715 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:44.808771 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.856301 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:44.856338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.419847 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:47.431877 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:47.431951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:47.461659 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:47.461682 1225677 cri.go:89] found id: ""
	I1217 01:32:47.461690 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:47.461747 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.465698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:47.465822 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:47.495157 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.495179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.495184 1225677 cri.go:89] found id: ""
	I1217 01:32:47.495192 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:47.495247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.499337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.503995 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:47.504080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:47.543135 1225677 cri.go:89] found id: ""
	I1217 01:32:47.543158 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.543167 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:47.543174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:47.543238 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:47.572765 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.572791 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:47.572797 1225677 cri.go:89] found id: ""
	I1217 01:32:47.572804 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:47.572867 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.577796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.581659 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:47.581760 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:47.612595 1225677 cri.go:89] found id: ""
	I1217 01:32:47.612660 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.612674 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:47.612681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:47.612744 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:47.642199 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:47.642223 1225677 cri.go:89] found id: ""
	I1217 01:32:47.642231 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:47.642287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.646215 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:47.646285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:47.672805 1225677 cri.go:89] found id: ""
	I1217 01:32:47.672830 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.672839 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:47.672849 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:47.672859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:47.702885 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:47.702917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:47.723284 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:47.723318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:47.799644 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:47.799674 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:47.799688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.839852 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:47.839884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.888519 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:47.888557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:47.973305 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:47.973344 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:48.081814 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:48.081853 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:48.114561 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:48.114590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:48.208193 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:48.208234 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:48.241262 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:48.241293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.770940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:50.781882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:50.781951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:50.809569 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:50.809595 1225677 cri.go:89] found id: ""
	I1217 01:32:50.809604 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:50.809665 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.814519 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:50.814594 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:50.849443 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:50.849472 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:50.849478 1225677 cri.go:89] found id: ""
	I1217 01:32:50.849486 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:50.849564 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.853510 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.857119 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:50.857224 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:50.888246 1225677 cri.go:89] found id: ""
	I1217 01:32:50.888275 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.888284 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:50.888291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:50.888351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:50.916294 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:50.916320 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:50.916326 1225677 cri.go:89] found id: ""
	I1217 01:32:50.916333 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:50.916388 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.920299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.924658 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:50.924730 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:50.957966 1225677 cri.go:89] found id: ""
	I1217 01:32:50.957994 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.958003 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:50.958009 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:50.958069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:50.991282 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.991304 1225677 cri.go:89] found id: ""
	I1217 01:32:50.991312 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:50.991377 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.995730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:50.995797 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:51.034122 1225677 cri.go:89] found id: ""
	I1217 01:32:51.034199 1225677 logs.go:282] 0 containers: []
	W1217 01:32:51.034238 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:51.034266 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:51.034295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:51.062022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:51.062100 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:51.081698 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:51.081733 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:51.112382 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:51.112482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:51.172152 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:51.172190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:51.213603 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:51.213634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:51.297400 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:51.297439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:51.331335 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:51.331412 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:51.426253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:51.426289 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:51.499310 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:51.499332 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:51.499348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:51.572760 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:51.572795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.122214 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:54.133644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:54.133721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:54.162887 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.162912 1225677 cri.go:89] found id: ""
	I1217 01:32:54.162922 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:54.162978 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.167057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:54.167127 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:54.205900 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.205920 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.205925 1225677 cri.go:89] found id: ""
	I1217 01:32:54.205932 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:54.205987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.210350 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.214343 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:54.214419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:54.246321 1225677 cri.go:89] found id: ""
	I1217 01:32:54.246348 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.246357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:54.246364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:54.246424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:54.276281 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.276305 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.276310 1225677 cri.go:89] found id: ""
	I1217 01:32:54.276319 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:54.276379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.281009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.285204 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:54.285281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:54.311149 1225677 cri.go:89] found id: ""
	I1217 01:32:54.311225 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.311251 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:54.311268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:54.311342 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:54.339737 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.339763 1225677 cri.go:89] found id: ""
	I1217 01:32:54.339771 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:54.339825 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.343615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:54.343749 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:54.370945 1225677 cri.go:89] found id: ""
	I1217 01:32:54.370971 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.370981 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:54.370991 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:54.371003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:54.390464 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:54.390495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:54.470328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:54.470363 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:54.470377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.495970 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:54.495999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.557300 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:54.557336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.585791 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:54.585821 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.612126 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:54.612152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:54.653218 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:54.653246 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:54.752385 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:54.752432 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.814139 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:54.814175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.885191 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:54.885226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:57.468539 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:57.479841 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:57.479913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:57.511032 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.511058 1225677 cri.go:89] found id: ""
	I1217 01:32:57.511067 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:57.511130 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.515373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:57.515446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:57.558508 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.558531 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:57.558537 1225677 cri.go:89] found id: ""
	I1217 01:32:57.558550 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:57.558622 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.563150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.567245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:57.567322 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:57.594294 1225677 cri.go:89] found id: ""
	I1217 01:32:57.594330 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.594341 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:57.594347 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:57.594411 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:57.626077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:57.626100 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.626106 1225677 cri.go:89] found id: ""
	I1217 01:32:57.626114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:57.626173 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.630289 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.634055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:57.634130 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:57.661683 1225677 cri.go:89] found id: ""
	I1217 01:32:57.661711 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.661721 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:57.661727 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:57.661785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:57.690521 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.690556 1225677 cri.go:89] found id: ""
	I1217 01:32:57.690565 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:57.690632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.694587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:57.694687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:57.721760 1225677 cri.go:89] found id: ""
	I1217 01:32:57.721783 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.721792 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:57.721801 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:57.721830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.749279 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:57.749308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.781988 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:57.782017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:57.820059 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:57.820089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:57.841084 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:57.841121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.884653 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:57.884752 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.932570 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:57.932605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:58.015607 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:58.015649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:58.116442 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:58.116479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:58.205896 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:58.205921 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:58.205934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:58.252524 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:58.252595 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.831933 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:00.843915 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:00.844011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:00.872994 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:00.873018 1225677 cri.go:89] found id: ""
	I1217 01:33:00.873027 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:00.873080 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.876819 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:00.876914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:00.904306 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:00.904329 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:00.904334 1225677 cri.go:89] found id: ""
	I1217 01:33:00.904342 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:00.904397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.908029 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.911563 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:00.911642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:00.940652 1225677 cri.go:89] found id: ""
	I1217 01:33:00.940678 1225677 logs.go:282] 0 containers: []
	W1217 01:33:00.940687 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:00.940694 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:00.940752 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:00.967462 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.967503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:00.967514 1225677 cri.go:89] found id: ""
	I1217 01:33:00.967522 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:00.967601 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.971689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.976107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:00.976187 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:01.015150 1225677 cri.go:89] found id: ""
	I1217 01:33:01.015230 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.015253 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:01.015273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:01.015366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:01.044488 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.044553 1225677 cri.go:89] found id: ""
	I1217 01:33:01.044578 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:01.044671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:01.048372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:01.048523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:01.083014 1225677 cri.go:89] found id: ""
	I1217 01:33:01.083096 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.083121 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:01.083173 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:01.083208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:01.181547 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:01.181588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:01.202930 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:01.202966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:01.255543 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:01.255580 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:01.282899 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:01.282927 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.310357 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:01.310387 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:01.361428 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:01.361458 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:01.439491 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:01.439564 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:01.439594 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:01.466548 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:01.466575 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:01.524293 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:01.524332 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:01.603276 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:01.603314 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.194004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:04.206859 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:04.206931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:04.245597 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.245621 1225677 cri.go:89] found id: ""
	I1217 01:33:04.245630 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:04.245688 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.249418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:04.249489 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:04.278257 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.278277 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.278284 1225677 cri.go:89] found id: ""
	I1217 01:33:04.278291 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:04.278405 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.282613 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.286801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:04.286878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:04.313756 1225677 cri.go:89] found id: ""
	I1217 01:33:04.313825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.313852 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:04.313866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:04.313946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:04.343505 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.343528 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.343533 1225677 cri.go:89] found id: ""
	I1217 01:33:04.343542 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:04.343595 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.347432 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.351245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:04.351318 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:04.378415 1225677 cri.go:89] found id: ""
	I1217 01:33:04.378443 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.378453 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:04.378461 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:04.378523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:04.404603 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.404635 1225677 cri.go:89] found id: ""
	I1217 01:33:04.404645 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:04.404699 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.408372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:04.408490 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:04.435025 1225677 cri.go:89] found id: ""
	I1217 01:33:04.435053 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.435063 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:04.435072 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:04.435084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:04.453398 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:04.453431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:04.532185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:04.532207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:04.532220 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.565093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:04.565122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.608097 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:04.608141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.669592 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:04.669635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.698199 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:04.698230 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.781891 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:04.781933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:04.889443 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:04.889483 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.935503 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:04.935540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.962255 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:04.962288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.497519 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:07.509544 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:07.509619 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:07.541912 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.541930 1225677 cri.go:89] found id: ""
	I1217 01:33:07.541938 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:07.541998 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.545880 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:07.545967 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:07.576061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.576085 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:07.576090 1225677 cri.go:89] found id: ""
	I1217 01:33:07.576098 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:07.576156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.580118 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.584118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:07.584216 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:07.613260 1225677 cri.go:89] found id: ""
	I1217 01:33:07.613288 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.613297 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:07.613304 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:07.613390 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:07.643089 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:07.643113 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:07.643118 1225677 cri.go:89] found id: ""
	I1217 01:33:07.643126 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:07.643181 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.646892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.650360 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:07.650433 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:07.677367 1225677 cri.go:89] found id: ""
	I1217 01:33:07.677393 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.677403 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:07.677409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:07.677515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:07.705475 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.705499 1225677 cri.go:89] found id: ""
	I1217 01:33:07.705508 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:07.705588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.709429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:07.709538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:07.737814 1225677 cri.go:89] found id: ""
	I1217 01:33:07.737838 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.737846 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:07.737855 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:07.737867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.767138 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:07.767166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.800084 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:07.800165 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:07.820093 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:07.820124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:07.887706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:07.887729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:07.887744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.915091 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:07.915122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.956054 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:07.956116 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:08.019066 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:08.019105 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:08.080377 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:08.080423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:08.124710 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:08.124793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:08.214495 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:08.214593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:10.827104 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:10.838284 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:10.838422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:10.874165 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:10.874184 1225677 cri.go:89] found id: ""
	I1217 01:33:10.874192 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:10.874245 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.878108 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:10.878180 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:10.903766 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:10.903789 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:10.903794 1225677 cri.go:89] found id: ""
	I1217 01:33:10.903802 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:10.903857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.907574 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.911142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:10.911214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:10.938246 1225677 cri.go:89] found id: ""
	I1217 01:33:10.938273 1225677 logs.go:282] 0 containers: []
	W1217 01:33:10.938283 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:10.938289 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:10.938347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:10.964843 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:10.964866 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:10.964871 1225677 cri.go:89] found id: ""
	I1217 01:33:10.964879 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:10.964935 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.968730 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.972392 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:10.972503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:11.008562 1225677 cri.go:89] found id: ""
	I1217 01:33:11.008590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.008600 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:11.008607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:11.008716 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:11.041307 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.041342 1225677 cri.go:89] found id: ""
	I1217 01:33:11.041352 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:11.041408 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:11.045319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:11.045394 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:11.072727 1225677 cri.go:89] found id: ""
	I1217 01:33:11.072757 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.072771 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:11.072781 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:11.072793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:11.092411 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:11.092531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:11.173959 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:11.173986 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:11.174000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:11.204098 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:11.204130 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:11.265126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:11.265169 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:11.329309 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:11.329350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:11.366487 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:11.366516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:11.449439 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:11.449474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:11.493614 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:11.493648 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.530111 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:11.530142 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:11.573692 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:11.573724 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.175120 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:14.187102 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:14.187212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:14.217900 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.217923 1225677 cri.go:89] found id: ""
	I1217 01:33:14.217933 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:14.217993 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.228556 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:14.228632 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:14.256615 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.256694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.256731 1225677 cri.go:89] found id: ""
	I1217 01:33:14.256747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:14.256855 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.260873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.264886 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:14.264982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:14.293944 1225677 cri.go:89] found id: ""
	I1217 01:33:14.294012 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.294036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:14.294057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:14.294149 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:14.322566 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.322586 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.322591 1225677 cri.go:89] found id: ""
	I1217 01:33:14.322599 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:14.322693 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.326575 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.330162 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:14.330237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:14.356466 1225677 cri.go:89] found id: ""
	I1217 01:33:14.356491 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.356500 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:14.356506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:14.356566 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:14.386031 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:14.386055 1225677 cri.go:89] found id: ""
	I1217 01:33:14.386064 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:14.386142 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.390030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:14.390110 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:14.416257 1225677 cri.go:89] found id: ""
	I1217 01:33:14.416284 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.416293 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:14.416303 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:14.416317 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.511192 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:14.511232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:14.604109 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:14.604132 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:14.604148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.656861 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:14.656895 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.685614 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:14.685642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:14.764169 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:14.764208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:14.812699 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:14.812730 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:14.831513 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:14.831547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.858309 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:14.858339 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.909041 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:14.909072 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.975681 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:14.975723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.515279 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:17.540730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:17.540806 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:17.570081 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:17.570102 1225677 cri.go:89] found id: ""
	I1217 01:33:17.570110 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:17.570178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.574399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:17.574471 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:17.599589 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:17.599610 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:17.599614 1225677 cri.go:89] found id: ""
	I1217 01:33:17.599622 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:17.599689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.604570 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.608574 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:17.608645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:17.635229 1225677 cri.go:89] found id: ""
	I1217 01:33:17.635306 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.635329 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:17.635350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:17.635422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:17.668964 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:17.669003 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.669009 1225677 cri.go:89] found id: ""
	I1217 01:33:17.669017 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:17.669103 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.673057 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.677753 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:17.677826 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:17.707206 1225677 cri.go:89] found id: ""
	I1217 01:33:17.707245 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.707255 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:17.707261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:17.707325 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:17.740289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.740313 1225677 cri.go:89] found id: ""
	I1217 01:33:17.740322 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:17.740385 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.744409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:17.744515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:17.771770 1225677 cri.go:89] found id: ""
	I1217 01:33:17.771797 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.771806 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:17.771815 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:17.771828 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.800155 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:17.800190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:17.882443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:17.882481 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:17.935750 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:17.935781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:17.954392 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:17.954425 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:18.031535 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:18.031568 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:18.031585 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:18.079987 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:18.080029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:18.108390 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:18.108454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:18.206148 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:18.206190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:18.238865 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:18.238894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:18.280200 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:18.280236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.844541 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:20.855183 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:20.855255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:20.883645 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:20.883666 1225677 cri.go:89] found id: ""
	I1217 01:33:20.883673 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:20.883731 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.888021 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:20.888094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:20.917299 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:20.917325 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:20.917330 1225677 cri.go:89] found id: ""
	I1217 01:33:20.917338 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:20.917397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.921256 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.925997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:20.926069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:20.952872 1225677 cri.go:89] found id: ""
	I1217 01:33:20.952898 1225677 logs.go:282] 0 containers: []
	W1217 01:33:20.952907 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:20.952913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:20.952970 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:20.979961 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.979983 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:20.979989 1225677 cri.go:89] found id: ""
	I1217 01:33:20.979998 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:20.980064 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.984302 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.989098 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:20.989171 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:21.023299 1225677 cri.go:89] found id: ""
	I1217 01:33:21.023365 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.023382 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:21.023390 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:21.023454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:21.052742 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.052763 1225677 cri.go:89] found id: ""
	I1217 01:33:21.052773 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:21.052830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:21.056774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:21.056847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:21.086360 1225677 cri.go:89] found id: ""
	I1217 01:33:21.086382 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.086391 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:21.086399 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:21.086411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:21.114471 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:21.114500 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:21.213416 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:21.213451 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:21.294188 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:21.294212 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:21.294253 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:21.321989 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:21.322022 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:21.361898 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:21.361940 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:21.415113 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:21.415151 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.443169 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:21.443202 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:21.538356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:21.538403 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:21.584226 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:21.584255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:21.602588 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:21.602625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.196991 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:24.207442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:24.207518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:24.243683 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.243708 1225677 cri.go:89] found id: ""
	I1217 01:33:24.243717 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:24.243772 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.247370 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:24.247444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:24.274124 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.274153 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.274159 1225677 cri.go:89] found id: ""
	I1217 01:33:24.274167 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:24.274224 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.277936 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.281546 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:24.281628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:24.310864 1225677 cri.go:89] found id: ""
	I1217 01:33:24.310893 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.310903 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:24.310910 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:24.310968 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:24.342620 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.342643 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.342648 1225677 cri.go:89] found id: ""
	I1217 01:33:24.342656 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:24.342714 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.346873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.350690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:24.350776 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:24.378447 1225677 cri.go:89] found id: ""
	I1217 01:33:24.378476 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.378486 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:24.378510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:24.378592 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:24.410097 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.410122 1225677 cri.go:89] found id: ""
	I1217 01:33:24.410132 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:24.410193 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.414020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:24.414094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:24.440741 1225677 cri.go:89] found id: ""
	I1217 01:33:24.440825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.440851 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:24.440879 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:24.440912 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:24.460132 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:24.460163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.493812 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:24.493842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.536741 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:24.536777 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.597219 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:24.597260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.663765 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:24.663805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.703808 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:24.703840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:24.784250 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:24.784288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:24.883741 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:24.883779 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:24.962818 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:24.962842 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:24.962856 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.994828 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:24.994858 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:27.546732 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:27.564740 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:27.564805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:27.608525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:27.608549 1225677 cri.go:89] found id: ""
	I1217 01:33:27.608558 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:27.608611 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.613062 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:27.613135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:27.659805 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:27.659827 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:27.659831 1225677 cri.go:89] found id: ""
	I1217 01:33:27.659838 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:27.659896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.664210 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.668351 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:27.668446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:27.704696 1225677 cri.go:89] found id: ""
	I1217 01:33:27.704771 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.704794 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:27.704815 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:27.704898 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:27.738798 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:27.738821 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:27.738827 1225677 cri.go:89] found id: ""
	I1217 01:33:27.738834 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:27.738896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.743026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.746985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:27.747059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:27.785087 1225677 cri.go:89] found id: ""
	I1217 01:33:27.785111 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.785119 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:27.785126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:27.785192 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:27.818270 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:27.818289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:27.818293 1225677 cri.go:89] found id: ""
	I1217 01:33:27.818300 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:27.818356 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.822652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.826638 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:27.826695 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:27.865573 1225677 cri.go:89] found id: ""
	I1217 01:33:27.865604 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.865613 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:27.865623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:27.865634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:27.972193 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:27.972232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:28.056562 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:28.056589 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:28.056605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:28.085398 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:28.085429 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:28.132214 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:28.132252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:28.174271 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:28.174303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:28.273045 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:28.273082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:28.321799 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:28.321880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:28.342146 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:28.342292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:28.406933 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:28.407120 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:28.498600 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:28.498680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:28.534124 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:28.534150 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.091052 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:31.103205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:31.103279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:31.140533 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.140556 1225677 cri.go:89] found id: ""
	I1217 01:33:31.140564 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:31.140627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.145121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:31.145202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:31.175735 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.175761 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.175768 1225677 cri.go:89] found id: ""
	I1217 01:33:31.175775 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:31.175832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.180026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.184555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:31.184628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:31.213074 1225677 cri.go:89] found id: ""
	I1217 01:33:31.213100 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.213110 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:31.213117 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:31.213174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:31.251260 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.251286 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.251291 1225677 cri.go:89] found id: ""
	I1217 01:33:31.251299 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:31.251354 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.255625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.259649 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:31.259726 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:31.287030 1225677 cri.go:89] found id: ""
	I1217 01:33:31.287056 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.287065 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:31.287072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:31.287128 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:31.314782 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.314851 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.314876 1225677 cri.go:89] found id: ""
	I1217 01:33:31.314902 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:31.314984 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.320071 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.324354 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:31.324534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:31.357412 1225677 cri.go:89] found id: ""
	I1217 01:33:31.357439 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.357449 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:31.357464 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:31.357480 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:31.462967 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:31.463006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:31.482965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:31.482995 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:31.552928 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:31.552952 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:31.552966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.579435 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:31.579470 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.619907 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:31.619945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.687595 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:31.687636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.720143 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:31.720175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.746106 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:31.746135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.812096 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:31.812131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.841610 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:31.841646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:31.920159 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:31.920197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:34.457713 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:34.469492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:34.469574 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:34.497755 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:34.497777 1225677 cri.go:89] found id: ""
	I1217 01:33:34.497786 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:34.497850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.501620 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:34.501703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:34.532206 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:34.532227 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:34.532231 1225677 cri.go:89] found id: ""
	I1217 01:33:34.532238 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:34.532299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.537376 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.541069 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:34.541142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:34.577690 1225677 cri.go:89] found id: ""
	I1217 01:33:34.577730 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.577740 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:34.577763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:34.577844 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:34.606156 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.606176 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:34.606180 1225677 cri.go:89] found id: ""
	I1217 01:33:34.606188 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:34.606243 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.610716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.614894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:34.614990 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:34.644563 1225677 cri.go:89] found id: ""
	I1217 01:33:34.644590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.644599 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:34.644605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:34.644685 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:34.673641 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:34.673666 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:34.673671 1225677 cri.go:89] found id: ""
	I1217 01:33:34.673679 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:34.673737 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.677531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.681295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:34.681370 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:34.708990 1225677 cri.go:89] found id: ""
	I1217 01:33:34.709071 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.709088 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:34.709099 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:34.709111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:34.809701 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:34.809785 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:34.828178 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:34.828210 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:34.903131 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:34.903155 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:34.903168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.971266 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:34.971304 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:35.004179 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:35.004215 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:35.041784 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:35.041815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:35.067541 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:35.067571 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:35.126841 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:35.126874 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:35.172191 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:35.172226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:35.200255 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:35.200295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:35.239991 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:35.240030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:37.824762 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:37.835623 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:37.835693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:37.865989 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:37.866008 1225677 cri.go:89] found id: ""
	I1217 01:33:37.866018 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:37.866073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.869857 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:37.869946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:37.898865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:37.898940 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:37.898960 1225677 cri.go:89] found id: ""
	I1217 01:33:37.898986 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:37.899093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.903232 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.907211 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:37.907281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:37.939280 1225677 cri.go:89] found id: ""
	I1217 01:33:37.939302 1225677 logs.go:282] 0 containers: []
	W1217 01:33:37.939311 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:37.939318 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:37.939379 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:37.967924 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:37.967945 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:37.967949 1225677 cri.go:89] found id: ""
	I1217 01:33:37.967957 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:37.968032 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.971797 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.975432 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:37.975510 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:38.007766 1225677 cri.go:89] found id: ""
	I1217 01:33:38.007790 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.007798 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:38.007805 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:38.007864 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:38.037473 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.037495 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.037503 1225677 cri.go:89] found id: ""
	I1217 01:33:38.037511 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:38.037566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.041569 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.045417 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:38.045524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:38.073829 1225677 cri.go:89] found id: ""
	I1217 01:33:38.073851 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.073860 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:38.073870 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:38.073882 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:38.093728 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:38.093764 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:38.176670 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:38.176690 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:38.176703 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:38.211414 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:38.211443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:38.263725 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:38.263761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:38.309151 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:38.309186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:38.338107 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:38.338143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.369538 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:38.369566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:38.449918 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:38.449954 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:38.542249 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:38.542288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:38.612539 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:38.612617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.642932 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:38.643015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:41.175028 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:41.186849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:41.186921 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:41.230880 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.230955 1225677 cri.go:89] found id: ""
	I1217 01:33:41.230992 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:41.231084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.235480 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:41.235641 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:41.266906 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.266980 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.267014 1225677 cri.go:89] found id: ""
	I1217 01:33:41.267040 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:41.267127 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.271136 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.275105 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:41.275225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:41.306499 1225677 cri.go:89] found id: ""
	I1217 01:33:41.306580 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.306603 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:41.306624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:41.306737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:41.333549 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.333575 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.333580 1225677 cri.go:89] found id: ""
	I1217 01:33:41.333589 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:41.333643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.337497 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.341450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:41.341531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:41.368976 1225677 cri.go:89] found id: ""
	I1217 01:33:41.369004 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.369014 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:41.369020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:41.369082 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:41.397520 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.397583 1225677 cri.go:89] found id: ""
	I1217 01:33:41.397607 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:41.397684 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.401528 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:41.401607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:41.427395 1225677 cri.go:89] found id: ""
	I1217 01:33:41.427423 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.427434 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:41.427444 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:41.427463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:41.525514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:41.525559 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:41.551264 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:41.551299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:41.625083 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:41.625123 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:41.625147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.702454 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:41.702490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.735107 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:41.735134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.769228 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:41.769269 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.799696 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:41.799725 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.848171 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:41.848207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.933395 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:41.933446 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:42.025408 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:42.025452 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:44.562646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:44.573393 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:44.573486 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:44.600868 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:44.600895 1225677 cri.go:89] found id: ""
	I1217 01:33:44.600906 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:44.600983 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.604710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:44.604780 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:44.632082 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:44.632158 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:44.632187 1225677 cri.go:89] found id: ""
	I1217 01:33:44.632208 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:44.632294 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.636315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.640212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:44.640285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:44.669382 1225677 cri.go:89] found id: ""
	I1217 01:33:44.669404 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.669413 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:44.669419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:44.669480 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:44.699713 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:44.699732 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.699737 1225677 cri.go:89] found id: ""
	I1217 01:33:44.699747 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:44.699801 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.703608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.707118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:44.707191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:44.733881 1225677 cri.go:89] found id: ""
	I1217 01:33:44.733905 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.733914 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:44.733921 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:44.733983 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:44.761418 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:44.761440 1225677 cri.go:89] found id: ""
	I1217 01:33:44.761449 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:44.761507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.765368 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:44.765451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:44.797562 1225677 cri.go:89] found id: ""
	I1217 01:33:44.797587 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.797595 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:44.797605 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:44.797617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.824683 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:44.824716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:44.935133 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:44.935177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:44.954652 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:44.954684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:45.015678 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:45.015775 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:45.189553 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:45.191524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:45.273264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:45.273306 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:45.371974 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:45.372013 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:45.409119 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:45.409149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:45.483606 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:45.483631 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:45.483645 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:45.511796 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:45.511826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.069605 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:48.081402 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:48.081501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:48.113467 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.113487 1225677 cri.go:89] found id: ""
	I1217 01:33:48.113496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:48.113554 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.123702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:48.123830 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:48.152225 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.152299 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.152320 1225677 cri.go:89] found id: ""
	I1217 01:33:48.152346 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:48.152452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.156596 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.160848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:48.160930 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:48.192903 1225677 cri.go:89] found id: ""
	I1217 01:33:48.192934 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.192944 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:48.192951 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:48.193016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:48.223459 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.223483 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.223489 1225677 cri.go:89] found id: ""
	I1217 01:33:48.223496 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:48.223577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.228708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.233033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:48.233131 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:48.264313 1225677 cri.go:89] found id: ""
	I1217 01:33:48.264339 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.264348 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:48.264355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:48.264430 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:48.292891 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.292963 1225677 cri.go:89] found id: ""
	I1217 01:33:48.292986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:48.293068 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.297013 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:48.297089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:48.324697 1225677 cri.go:89] found id: ""
	I1217 01:33:48.324724 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.324734 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:48.324743 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:48.324755 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:48.343285 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:48.343318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.401079 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:48.401121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.445651 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:48.445685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.487906 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:48.487936 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.520261 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:48.520288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:48.612095 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:48.612132 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:48.686505 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:48.686528 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:48.686545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.715518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:48.715549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.780723 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:48.780758 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:48.813883 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:48.813910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.424534 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:51.435019 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:51.435089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:51.461515 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.461539 1225677 cri.go:89] found id: ""
	I1217 01:33:51.461549 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:51.461610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.465697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:51.465778 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:51.494232 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.494254 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:51.494260 1225677 cri.go:89] found id: ""
	I1217 01:33:51.494267 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:51.494342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.498178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.501847 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:51.501920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:51.533242 1225677 cri.go:89] found id: ""
	I1217 01:33:51.533267 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.533277 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:51.533283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:51.533356 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:51.559915 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.559937 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:51.559942 1225677 cri.go:89] found id: ""
	I1217 01:33:51.559950 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:51.560017 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.563739 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.567426 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:51.567506 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:51.598933 1225677 cri.go:89] found id: ""
	I1217 01:33:51.598958 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.598978 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:51.598985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:51.599043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:51.628013 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:51.628085 1225677 cri.go:89] found id: ""
	I1217 01:33:51.628107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:51.628195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.632081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:51.632153 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:51.664059 1225677 cri.go:89] found id: ""
	I1217 01:33:51.664095 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.664104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:51.664114 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:51.664127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.703117 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:51.703141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.746864 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:51.746901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.813259 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:51.813294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:51.890408 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:51.890448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.996243 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:51.996281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:52.078355 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:52.078385 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:52.078399 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:52.124157 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:52.124201 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:52.158325 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:52.158406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:52.194882 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:52.194917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:52.236180 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:52.236223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:54.755766 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:54.766584 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:54.766659 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:54.794813 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:54.794834 1225677 cri.go:89] found id: ""
	I1217 01:33:54.794844 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:54.794900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.798697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:54.798816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:54.830345 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:54.830368 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:54.830374 1225677 cri.go:89] found id: ""
	I1217 01:33:54.830381 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:54.830437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.834212 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.837869 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:54.837958 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:54.865687 1225677 cri.go:89] found id: ""
	I1217 01:33:54.865710 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.865720 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:54.865726 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:54.865784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:54.893199 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:54.893222 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:54.893228 1225677 cri.go:89] found id: ""
	I1217 01:33:54.893236 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:54.893300 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.897296 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.901035 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:54.901109 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:54.935123 1225677 cri.go:89] found id: ""
	I1217 01:33:54.935150 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.935160 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:54.935165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:54.935227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:54.960828 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:54.960908 1225677 cri.go:89] found id: ""
	I1217 01:33:54.960925 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:54.960994 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.965788 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:54.965858 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:54.996816 1225677 cri.go:89] found id: ""
	I1217 01:33:54.996844 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.996854 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:54.996864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:54.996877 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:55.049187 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:55.049226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:55.122184 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:55.122224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:55.149525 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:55.149555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:55.259828 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:55.259866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:55.286876 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:55.286905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:55.332115 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:55.332149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:55.359308 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:55.359340 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:55.444861 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:55.444901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:55.492994 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:55.493026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:55.512281 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:55.512312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:55.587576 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.089262 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:58.101573 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:58.101658 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:58.137991 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:58.138015 1225677 cri.go:89] found id: ""
	I1217 01:33:58.138024 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:58.138084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.142504 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:58.142579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:58.172313 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.172337 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.172343 1225677 cri.go:89] found id: ""
	I1217 01:33:58.172350 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:58.172446 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.176396 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.180282 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:58.180366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:58.211138 1225677 cri.go:89] found id: ""
	I1217 01:33:58.211171 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.211181 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:58.211193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:58.211257 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:58.243736 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.243759 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.243764 1225677 cri.go:89] found id: ""
	I1217 01:33:58.243773 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:58.243830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.247791 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.251576 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:58.251655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:58.288139 1225677 cri.go:89] found id: ""
	I1217 01:33:58.288173 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.288184 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:58.288193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:58.288255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:58.317667 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.317690 1225677 cri.go:89] found id: ""
	I1217 01:33:58.317700 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:58.317763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.321820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:58.321906 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:58.350850 1225677 cri.go:89] found id: ""
	I1217 01:33:58.350878 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.350888 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:58.350897 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:58.350910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.416830 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:58.416867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.444837 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:58.444868 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:58.528215 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:58.528263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:58.575846 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:58.575880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:58.595772 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:58.595807 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.650340 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:58.650375 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.701278 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:58.701316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.732779 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:58.732810 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:58.835274 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:58.835310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:58.910122 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.910207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:58.910236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.438103 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:01.448838 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:01.448920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:01.479627 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.479651 1225677 cri.go:89] found id: ""
	I1217 01:34:01.479678 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:01.479736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.483564 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:01.483634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:01.510339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.510364 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.510370 1225677 cri.go:89] found id: ""
	I1217 01:34:01.510378 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:01.510435 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.514437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.519025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:01.519139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:01.547434 1225677 cri.go:89] found id: ""
	I1217 01:34:01.547457 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.547466 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:01.547473 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:01.547530 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:01.574487 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.574508 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.574513 1225677 cri.go:89] found id: ""
	I1217 01:34:01.574520 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:01.574577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.578139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.581545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:01.581626 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:01.609342 1225677 cri.go:89] found id: ""
	I1217 01:34:01.609365 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.609374 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:01.609381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:01.609439 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:01.636506 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:01.636530 1225677 cri.go:89] found id: ""
	I1217 01:34:01.636540 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:01.636602 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.640274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:01.640388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:01.669875 1225677 cri.go:89] found id: ""
	I1217 01:34:01.669944 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.669969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:01.669993 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:01.670033 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.710653 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:01.710691 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.763990 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:01.764028 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.833068 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:01.833107 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.863940 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:01.864023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:01.967213 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:01.967254 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:01.992938 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:01.992972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:02.024381 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:02.024443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:02.106857 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:02.106896 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:02.143612 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:02.143646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:02.213706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:02.213729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:02.213742 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.741826 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:04.752958 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:04.753026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:04.783743 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.783762 1225677 cri.go:89] found id: ""
	I1217 01:34:04.783770 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:04.784150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.788287 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:04.788359 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:04.817040 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:04.817073 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:04.817079 1225677 cri.go:89] found id: ""
	I1217 01:34:04.817086 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:04.817147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.821094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.825495 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:04.825571 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:04.853100 1225677 cri.go:89] found id: ""
	I1217 01:34:04.853124 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.853133 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:04.853140 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:04.853202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:04.881403 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:04.881425 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:04.881430 1225677 cri.go:89] found id: ""
	I1217 01:34:04.881438 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:04.881502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.885516 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.889230 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:04.889353 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:04.915187 1225677 cri.go:89] found id: ""
	I1217 01:34:04.915219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.915229 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:04.915235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:04.915296 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:04.946769 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:04.946802 1225677 cri.go:89] found id: ""
	I1217 01:34:04.946811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:04.946884 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.951231 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:04.951339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:04.978082 1225677 cri.go:89] found id: ""
	I1217 01:34:04.978110 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.978120 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:04.978128 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:04.978166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:05.019076 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:05.019109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:05.101083 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:05.101161 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:05.177848 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:05.177870 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:05.177884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:05.204143 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:05.204172 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:05.268231 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:05.268268 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:05.297025 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:05.297054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:05.327881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:05.327911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:05.437319 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:05.437360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:05.456847 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:05.456883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:05.498209 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:05.498242 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.077748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:08.088818 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:08.088890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:08.126181 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.126213 1225677 cri.go:89] found id: ""
	I1217 01:34:08.126227 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:08.126292 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.131226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:08.131346 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:08.160808 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.160832 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.160837 1225677 cri.go:89] found id: ""
	I1217 01:34:08.160846 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:08.160923 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.166045 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.170405 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:08.170497 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:08.200928 1225677 cri.go:89] found id: ""
	I1217 01:34:08.200954 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.200964 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:08.200970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:08.201068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:08.237681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.237706 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.237711 1225677 cri.go:89] found id: ""
	I1217 01:34:08.237719 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:08.237794 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.241696 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.245486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:08.245561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:08.272543 1225677 cri.go:89] found id: ""
	I1217 01:34:08.272572 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.272582 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:08.272594 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:08.272676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:08.304603 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.304627 1225677 cri.go:89] found id: ""
	I1217 01:34:08.304635 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:08.304690 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.308617 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:08.308691 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:08.338781 1225677 cri.go:89] found id: ""
	I1217 01:34:08.338809 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.338818 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:08.338827 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:08.338839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:08.374627 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:08.374660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:08.472485 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:08.472523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:08.490991 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:08.491026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.574253 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:08.574292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.602049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:08.602118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:08.681328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:08.681348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:08.681361 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.708974 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:08.709000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.761284 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:08.761320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.819965 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:08.820006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.850377 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:08.850405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:11.432699 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:11.444142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:11.444218 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:11.477380 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.477404 1225677 cri.go:89] found id: ""
	I1217 01:34:11.477414 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:11.477475 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.481941 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:11.482014 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:11.510503 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.510529 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.510546 1225677 cri.go:89] found id: ""
	I1217 01:34:11.510554 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:11.510650 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.514842 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.518923 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:11.519013 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:11.546962 1225677 cri.go:89] found id: ""
	I1217 01:34:11.546990 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.547000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:11.547006 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:11.547080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:11.574757 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:11.574782 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:11.574787 1225677 cri.go:89] found id: ""
	I1217 01:34:11.574796 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:11.574877 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.579088 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.583273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:11.583402 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:11.613215 1225677 cri.go:89] found id: ""
	I1217 01:34:11.613244 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.613254 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:11.613261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:11.613326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:11.642127 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:11.642166 1225677 cri.go:89] found id: ""
	I1217 01:34:11.642175 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:11.642249 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.646180 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:11.646281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:11.676821 1225677 cri.go:89] found id: ""
	I1217 01:34:11.676848 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.676858 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:11.676868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:11.676880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:11.776881 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:11.776922 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:11.797665 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:11.797700 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:11.873871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:11.873895 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:11.873909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.901431 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:11.901461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.946983 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:11.947021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.993263 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:11.993299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:12.069104 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:12.069143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:12.101484 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:12.101511 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:12.137373 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:12.137404 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:12.219779 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:12.219833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:14.749747 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:14.760900 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:14.760971 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:14.789422 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:14.789504 1225677 cri.go:89] found id: ""
	I1217 01:34:14.789520 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:14.789579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.794016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:14.794094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:14.820779 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:14.820802 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:14.820808 1225677 cri.go:89] found id: ""
	I1217 01:34:14.820815 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:14.820892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.824759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.828502 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:14.828620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:14.855015 1225677 cri.go:89] found id: ""
	I1217 01:34:14.855042 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.855051 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:14.855058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:14.855118 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:14.882554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:14.882580 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:14.882586 1225677 cri.go:89] found id: ""
	I1217 01:34:14.882594 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:14.882649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.886723 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.890383 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:14.890487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:14.921014 1225677 cri.go:89] found id: ""
	I1217 01:34:14.921051 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.921077 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:14.921096 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:14.921186 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:14.950121 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:14.950151 1225677 cri.go:89] found id: ""
	I1217 01:34:14.950160 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:14.950235 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.954391 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:14.954491 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:14.981305 1225677 cri.go:89] found id: ""
	I1217 01:34:14.981381 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.981396 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:14.981406 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:14.981424 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:15.082515 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:15.082601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:15.115676 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:15.115766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:15.207150 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:15.207196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:15.253067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:15.253103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:15.282406 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:15.282434 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:15.332186 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:15.332232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:15.383617 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:15.383653 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:15.413724 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:15.413761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:15.512500 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:15.512539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:15.531712 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:15.531744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:15.607024 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.107382 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:18.125209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:18.125300 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:18.154715 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.154743 1225677 cri.go:89] found id: ""
	I1217 01:34:18.154759 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:18.154827 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.158989 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:18.159058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:18.186887 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.186906 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.186910 1225677 cri.go:89] found id: ""
	I1217 01:34:18.186918 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:18.186974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.191114 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.195016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:18.195088 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:18.230496 1225677 cri.go:89] found id: ""
	I1217 01:34:18.230522 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.230532 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:18.230541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:18.230603 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:18.257433 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.257453 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.257458 1225677 cri.go:89] found id: ""
	I1217 01:34:18.257466 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:18.257522 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.261223 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.264998 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:18.265077 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:18.298281 1225677 cri.go:89] found id: ""
	I1217 01:34:18.298359 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.298373 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:18.298381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:18.298438 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:18.326008 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:18.326029 1225677 cri.go:89] found id: ""
	I1217 01:34:18.326038 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:18.326094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.329952 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:18.330026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:18.355880 1225677 cri.go:89] found id: ""
	I1217 01:34:18.355914 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.355924 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:18.355956 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:18.355971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:18.430677 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:18.430716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:18.461146 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:18.461178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:18.483944 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:18.483976 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:18.558884 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.558914 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:18.558930 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.631593 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:18.631631 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.661399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:18.661431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:18.765933 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:18.765971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.798005 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:18.798035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.838207 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:18.838245 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.879939 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:18.879973 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.409362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:21.420285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:21.420355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:21.450399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:21.450424 1225677 cri.go:89] found id: ""
	I1217 01:34:21.450433 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:21.450488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.454541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:21.454613 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:21.484061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.484086 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:21.484091 1225677 cri.go:89] found id: ""
	I1217 01:34:21.484099 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:21.484156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.488024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.491648 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:21.491718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:21.522026 1225677 cri.go:89] found id: ""
	I1217 01:34:21.522052 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.522062 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:21.522071 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:21.522139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:21.554855 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.554887 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:21.554894 1225677 cri.go:89] found id: ""
	I1217 01:34:21.554902 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:21.554955 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.558520 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.562302 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:21.562407 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:21.590541 1225677 cri.go:89] found id: ""
	I1217 01:34:21.590564 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.590574 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:21.590580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:21.590636 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:21.626269 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.626340 1225677 cri.go:89] found id: ""
	I1217 01:34:21.626366 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:21.626428 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.630350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:21.630464 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:21.666471 1225677 cri.go:89] found id: ""
	I1217 01:34:21.666498 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.666507 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:21.666516 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:21.666533 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.706780 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:21.706815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.774693 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:21.774729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:21.861669 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:21.861713 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:21.977061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:21.977096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:22.003122 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:22.003171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:22.051916 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:22.051957 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:22.082713 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:22.082746 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:22.116010 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:22.116037 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:22.146809 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:22.146848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:22.228639 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:22.228703 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:22.228732 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.754744 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:24.765436 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:24.765518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:24.794628 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.794658 1225677 cri.go:89] found id: ""
	I1217 01:34:24.794667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:24.794732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.798378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:24.798454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:24.832756 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:24.832781 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:24.832787 1225677 cri.go:89] found id: ""
	I1217 01:34:24.832794 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:24.832850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.836854 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.840412 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:24.840572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:24.868168 1225677 cri.go:89] found id: ""
	I1217 01:34:24.868247 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.868270 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:24.868290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:24.868381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:24.899805 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:24.899825 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:24.899830 1225677 cri.go:89] found id: ""
	I1217 01:34:24.899838 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:24.899893 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.903464 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.906950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:24.907067 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:24.935718 1225677 cri.go:89] found id: ""
	I1217 01:34:24.935744 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.935753 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:24.935760 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:24.935818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:24.967779 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:24.967802 1225677 cri.go:89] found id: ""
	I1217 01:34:24.967811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:24.967863 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.971468 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:24.971534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:25.001724 1225677 cri.go:89] found id: ""
	I1217 01:34:25.001815 1225677 logs.go:282] 0 containers: []
	W1217 01:34:25.001842 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:25.001890 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:25.001925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:25.023512 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:25.023709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:25.051815 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:25.051848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:25.099451 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:25.099487 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:25.141801 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:25.141832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:25.178412 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:25.178444 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:25.285631 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:25.285667 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:25.362578 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:25.362602 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:25.362617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:25.403014 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:25.403050 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:25.510336 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:25.510395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:25.543551 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:25.543582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.129531 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:28.140763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:28.140832 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:28.184591 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.184616 1225677 cri.go:89] found id: ""
	I1217 01:34:28.184624 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:28.184707 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.188557 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:28.188634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:28.222629 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.222651 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.222656 1225677 cri.go:89] found id: ""
	I1217 01:34:28.222664 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:28.222724 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.226610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.230481 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:28.230575 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:28.257099 1225677 cri.go:89] found id: ""
	I1217 01:34:28.257126 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.257135 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:28.257142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:28.257220 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:28.291310 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:28.291347 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.291354 1225677 cri.go:89] found id: ""
	I1217 01:34:28.291388 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:28.291469 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.295342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.298970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:28.299075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:28.329122 1225677 cri.go:89] found id: ""
	I1217 01:34:28.329146 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.329155 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:28.329182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:28.329254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:28.359713 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.359736 1225677 cri.go:89] found id: ""
	I1217 01:34:28.359745 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:28.359803 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.363561 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:28.363633 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:28.397883 1225677 cri.go:89] found id: ""
	I1217 01:34:28.397910 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.397920 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:28.397929 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:28.397941 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.431945 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:28.431974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:28.482268 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:28.482300 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.509035 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:28.509067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.557586 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:28.557623 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.616155 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:28.616203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.647557 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:28.647590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.723102 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:28.723139 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:28.830255 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:28.830293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:28.849322 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:28.849355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:28.919883 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:28.919905 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:28.919926 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.492801 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:31.504000 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:31.504075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:31.539143 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.539163 1225677 cri.go:89] found id: ""
	I1217 01:34:31.539173 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:31.539228 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.543277 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:31.543355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:31.573251 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:31.573271 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:31.573275 1225677 cri.go:89] found id: ""
	I1217 01:34:31.573284 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:31.573337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.577458 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.581377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:31.581451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:31.612241 1225677 cri.go:89] found id: ""
	I1217 01:34:31.612270 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.612280 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:31.612286 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:31.612345 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:31.643539 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.643563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.643569 1225677 cri.go:89] found id: ""
	I1217 01:34:31.643578 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:31.643638 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.647841 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.651771 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:31.651855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:31.685384 1225677 cri.go:89] found id: ""
	I1217 01:34:31.685409 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.685418 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:31.685425 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:31.685487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:31.713458 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.713491 1225677 cri.go:89] found id: ""
	I1217 01:34:31.713501 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:31.713571 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.717510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:31.717598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:31.742954 1225677 cri.go:89] found id: ""
	I1217 01:34:31.742979 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.742989 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:31.742998 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:31.743030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:31.826689 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:31.826712 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:31.826726 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.858359 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:31.858389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.890466 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:31.890494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.920394 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:31.920516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:31.954114 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:31.954143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:32.048397 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:32.048463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:32.068978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:32.069014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:32.126891 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:32.126931 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:32.194493 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:32.194531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:32.278811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:32.278854 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
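	Each retry cycle in this stretch of the log repeats the same collection pass: pgrep for a running kube-apiserver, enumerate control-plane containers with crictl, then tail the last 400 lines of each container plus kubelet, CRI-O, and dmesg. All of the commands appear verbatim in the entries above; as a hedged sketch only (not minikube code; it assumes crictl and sudo are available on the node, and container IDs will differ between runs), the per-container step reduces to:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List all kube-apiserver containers (running or exited), mirroring
		// "sudo crictl ps -a --quiet --name=kube-apiserver" from the log.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(ids)) {
			// Tail the last 400 lines of each container, mirroring
			// "sudo crictl logs --tail 400 <id>" from the log.
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==== %s ====\n%s\n", id, out)
		}
	}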
	I1217 01:34:34.866004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:34.876932 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:34.877040 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:34.904525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:34.904548 1225677 cri.go:89] found id: ""
	I1217 01:34:34.904556 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:34.904634 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.908290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:34.908388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:34.937927 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:34.937962 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:34.937967 1225677 cri.go:89] found id: ""
	I1217 01:34:34.937975 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:34.938053 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.941844 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.945447 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:34.945529 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:34.974834 1225677 cri.go:89] found id: ""
	I1217 01:34:34.974860 1225677 logs.go:282] 0 containers: []
	W1217 01:34:34.974870 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:34.974876 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:34.974932 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:35.015100 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.015121 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.015126 1225677 cri.go:89] found id: ""
	I1217 01:34:35.015134 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:35.015196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.019378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.023124 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:35.023202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:35.055461 1225677 cri.go:89] found id: ""
	I1217 01:34:35.055488 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.055497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:35.055503 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:35.055561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:35.083009 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.083083 1225677 cri.go:89] found id: ""
	I1217 01:34:35.083107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:35.083195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.087719 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:35.087788 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:35.115588 1225677 cri.go:89] found id: ""
	I1217 01:34:35.115615 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.115625 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:35.115649 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:35.115664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:35.165942 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:35.165978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.194775 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:35.194803 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:35.291776 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:35.291811 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:35.338079 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:35.338110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:35.357793 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:35.357824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:35.428871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:35.428893 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:35.428905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.499513 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:35.499548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.540136 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:35.540211 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:35.636873 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:35.636913 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:35.665818 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:35.665889 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.220553 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:38.231749 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:38.231823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:38.259479 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.259500 1225677 cri.go:89] found id: ""
	I1217 01:34:38.259509 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:38.259568 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.263241 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:38.263385 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:38.295256 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.295292 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.295301 1225677 cri.go:89] found id: ""
	I1217 01:34:38.295310 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:38.295378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.300468 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.305174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:38.305294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:38.339161 1225677 cri.go:89] found id: ""
	I1217 01:34:38.339194 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.339204 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:38.339210 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:38.339275 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:38.367494 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.367518 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:38.367524 1225677 cri.go:89] found id: ""
	I1217 01:34:38.367531 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:38.367608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.371441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.375084 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:38.375191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:38.401755 1225677 cri.go:89] found id: ""
	I1217 01:34:38.401784 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.401795 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:38.401801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:38.401890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:38.429928 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.429962 1225677 cri.go:89] found id: ""
	I1217 01:34:38.429971 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:38.430044 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.433894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:38.433965 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:38.461088 1225677 cri.go:89] found id: ""
	I1217 01:34:38.461114 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.461124 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:38.461133 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:38.461144 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:38.544237 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:38.544274 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.574281 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:38.574312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.620093 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:38.620131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.674826 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:38.674902 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.752562 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:38.752603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.781494 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:38.781527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:38.833674 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:38.833706 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:38.933793 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:38.933832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:38.953733 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:38.953782 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:39.029298 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:39.029322 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:39.029336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.557003 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:41.568311 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:41.568412 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:41.601070 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:41.601089 1225677 cri.go:89] found id: ""
	I1217 01:34:41.601097 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:41.601156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.605150 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:41.605227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:41.633863 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:41.633887 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:41.633893 1225677 cri.go:89] found id: ""
	I1217 01:34:41.633901 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:41.633958 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.638555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.644087 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:41.644168 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:41.684237 1225677 cri.go:89] found id: ""
	I1217 01:34:41.684276 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.684287 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:41.684294 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:41.684371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:41.717925 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:41.717993 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.718016 1225677 cri.go:89] found id: ""
	I1217 01:34:41.718032 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:41.718109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.722478 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.726529 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:41.726607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:41.754525 1225677 cri.go:89] found id: ""
	I1217 01:34:41.754552 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.754562 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:41.754571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:41.754673 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:41.784794 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:41.784860 1225677 cri.go:89] found id: ""
	I1217 01:34:41.784883 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:41.784969 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.788882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:41.788980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:41.825117 1225677 cri.go:89] found id: ""
	I1217 01:34:41.825193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.825216 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:41.825233 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:41.825259 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:41.934154 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:41.934191 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:41.955231 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:41.955263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:42.023779 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:42.023819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:42.054183 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:42.054218 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:42.146898 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:42.147005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:42.249519 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:42.249543 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:42.249557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:42.280803 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:42.280833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:42.327682 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:42.327731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:42.373795 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:42.373832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:42.415409 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:42.415437 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:44.951197 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:44.962939 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:44.963016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:44.996268 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:44.996297 1225677 cri.go:89] found id: ""
	I1217 01:34:44.996306 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:44.996365 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.016281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:45.016367 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:45.152354 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.152375 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.152380 1225677 cri.go:89] found id: ""
	I1217 01:34:45.152389 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:45.152473 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.161519 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.169793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:45.169869 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:45.269649 1225677 cri.go:89] found id: ""
	I1217 01:34:45.269685 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.269696 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:45.269715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:45.269816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:45.322137 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.322210 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:45.322250 1225677 cri.go:89] found id: ""
	I1217 01:34:45.322320 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:45.322406 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.327229 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.331531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:45.331703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:45.362501 1225677 cri.go:89] found id: ""
	I1217 01:34:45.362571 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.362602 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:45.362624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:45.362696 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:45.394160 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.394240 1225677 cri.go:89] found id: ""
	I1217 01:34:45.394258 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:45.394335 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.398315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:45.398397 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:45.426737 1225677 cri.go:89] found id: ""
	I1217 01:34:45.426780 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.426790 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:45.426819 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:45.426839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:45.503383 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:45.503464 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:45.503485 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:45.535637 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:45.535672 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.583362 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:45.583398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.613182 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:45.613214 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:45.695579 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:45.695626 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:45.729534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:45.729563 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:45.826222 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:45.826262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:45.846157 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:45.846195 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.911389 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:45.911426 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.983046 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:45.983084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.519530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:48.530493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:48.530565 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:48.560366 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:48.560471 1225677 cri.go:89] found id: ""
	I1217 01:34:48.560496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:48.560585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.564848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:48.564920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:48.593560 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.593628 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:48.593666 1225677 cri.go:89] found id: ""
	I1217 01:34:48.593696 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:48.593783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.597895 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.601634 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:48.601718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:48.631022 1225677 cri.go:89] found id: ""
	I1217 01:34:48.631048 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.631057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:48.631064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:48.631122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:48.656804 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:48.656829 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.656834 1225677 cri.go:89] found id: ""
	I1217 01:34:48.656841 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:48.656898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.660979 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.664698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:48.664770 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:48.692344 1225677 cri.go:89] found id: ""
	I1217 01:34:48.692372 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.692383 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:48.692389 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:48.692481 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:48.721997 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:48.722020 1225677 cri.go:89] found id: ""
	I1217 01:34:48.722029 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:48.722111 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.726120 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:48.726247 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:48.753313 1225677 cri.go:89] found id: ""
	I1217 01:34:48.753339 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.753349 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:48.753358 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:48.753388 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:48.849435 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:48.849474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:48.870486 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:48.870523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:48.943874 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:48.943904 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:48.943919 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.991171 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:48.991205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:49.020622 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:49.020649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:49.064904 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:49.064942 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:49.143148 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:49.143186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:49.174999 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:49.175086 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:49.209127 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:49.209156 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:49.296275 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:49.296325 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:51.840412 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:51.851134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:51.851204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:51.880791 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:51.880811 1225677 cri.go:89] found id: ""
	I1217 01:34:51.880820 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:51.880879 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.884883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:51.884962 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:51.911511 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:51.911535 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:51.911541 1225677 cri.go:89] found id: ""
	I1217 01:34:51.911549 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:51.911607 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.915352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.918918 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:51.918986 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:51.950127 1225677 cri.go:89] found id: ""
	I1217 01:34:51.950152 1225677 logs.go:282] 0 containers: []
	W1217 01:34:51.950163 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:51.950169 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:51.950266 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:51.978696 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:51.978725 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:51.978731 1225677 cri.go:89] found id: ""
	I1217 01:34:51.978738 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:51.978795 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.982736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.986411 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:51.986482 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:52.016886 1225677 cri.go:89] found id: ""
	I1217 01:34:52.016911 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.016920 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:52.016926 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:52.016989 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:52.045870 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.045895 1225677 cri.go:89] found id: ""
	I1217 01:34:52.045904 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:52.045962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:52.049906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:52.049977 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:52.077565 1225677 cri.go:89] found id: ""
	I1217 01:34:52.077592 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.077604 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:52.077614 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:52.077646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:52.105176 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:52.105205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:52.211964 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:52.211999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:52.252350 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:52.252382 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:52.306053 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:52.306088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:52.376262 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:52.376302 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:52.403480 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:52.403508 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.431952 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:52.431983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:52.510953 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:52.510990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:52.555450 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:52.555482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:52.574086 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:52.574119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:52.644412 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.144646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:55.155615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:55.155693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:55.184697 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.184716 1225677 cri.go:89] found id: ""
	I1217 01:34:55.184724 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:55.184781 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.188462 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:55.188538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:55.217937 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.217961 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.217966 1225677 cri.go:89] found id: ""
	I1217 01:34:55.217974 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:55.218030 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.221924 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.226643 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:55.226714 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:55.254617 1225677 cri.go:89] found id: ""
	I1217 01:34:55.254645 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.254655 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:55.254662 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:55.254721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:55.282393 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.282419 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.282424 1225677 cri.go:89] found id: ""
	I1217 01:34:55.282432 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:55.282485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.286357 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.289912 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:55.289992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:55.316252 1225677 cri.go:89] found id: ""
	I1217 01:34:55.316278 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.316288 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:55.316295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:55.316368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:55.343249 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.343314 1225677 cri.go:89] found id: ""
	I1217 01:34:55.343337 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:55.343433 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.347319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:55.347448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:55.381545 1225677 cri.go:89] found id: ""
	I1217 01:34:55.381629 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.381645 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:55.381656 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:55.381669 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.421981 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:55.422014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.453301 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:55.453342 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.480646 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:55.480687 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:55.570826 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.570849 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:55.570863 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.599216 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:55.599257 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.658218 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:55.658310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.745919 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:55.745955 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:55.838064 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:55.838101 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:55.888374 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:55.888405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:55.996293 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:55.996331 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
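	(The cycle above shows minikube repeatedly probing for a kube-apiserver process and, after each failed check, listing the CRI containers for every control-plane component and gathering kubelet, dmesg, CRI-O and per-container logs. Each "describe nodes" attempt fails with "dial tcp [::1]:8443: connect: connection refused", i.e. nothing is listening on the apiserver port yet. The commands below are an illustrative sketch of manual checks one might run on the node, for example via "minikube ssh"; they are not part of the recorded test run, and the container ID is simply copied from the log above.)

		# Is anything listening on the apiserver port?
		sudo ss -ltnp | grep 8443
		# Is the kube-apiserver container running, or only created/exited?
		sudo crictl ps -a --name=kube-apiserver
		# Inspect its recent output for crash or bind errors (ID taken from the log above):
		sudo crictl logs --tail 100 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016
		# See whether the kubelet keeps restarting the static pod:
		sudo journalctl -u kubelet -n 100 --no-pager | grep -i apiserver
		# Probe the health endpoint directly; expect "ok" once the apiserver is up:
		curl -sk https://localhost:8443/healthz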
	I1217 01:34:58.522397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:58.536202 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:58.536271 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:58.566870 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:58.566965 1225677 cri.go:89] found id: ""
	I1217 01:34:58.566994 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:58.567139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.571283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:58.571363 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:58.598180 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:58.598208 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.598213 1225677 cri.go:89] found id: ""
	I1217 01:34:58.598222 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:58.598297 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.602201 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.605913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:58.605997 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:58.636167 1225677 cri.go:89] found id: ""
	I1217 01:34:58.636193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.636202 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:58.636209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:58.636270 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:58.662111 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:58.662135 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:58.662140 1225677 cri.go:89] found id: ""
	I1217 01:34:58.662148 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:58.662209 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.666315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.670253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:58.670348 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:58.696144 1225677 cri.go:89] found id: ""
	I1217 01:34:58.696219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.696244 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:58.696265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:58.696347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:58.726742 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.726767 1225677 cri.go:89] found id: ""
	I1217 01:34:58.726776 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:58.726832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.730710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:58.730785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:58.759394 1225677 cri.go:89] found id: ""
	I1217 01:34:58.759421 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.759431 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:58.759440 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:58.759454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.817531 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:58.817569 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.847360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:58.847389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:58.929741 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:58.929776 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:58.968951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:58.968982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:59.043218 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:59.043239 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:59.043255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:59.070405 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:59.070431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:59.146784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:59.146829 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:59.179445 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:59.179479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:59.286441 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:59.286479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:59.308412 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:59.308540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.850397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:01.863234 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:01.863368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:01.898442 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:01.898473 1225677 cri.go:89] found id: ""
	I1217 01:35:01.898484 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:01.898577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.903064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:01.903142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:01.936524 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.936547 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:01.936551 1225677 cri.go:89] found id: ""
	I1217 01:35:01.936559 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:01.936625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.942865 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.947963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:01.948071 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:01.979359 1225677 cri.go:89] found id: ""
	I1217 01:35:01.979384 1225677 logs.go:282] 0 containers: []
	W1217 01:35:01.979393 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:01.979399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:01.979466 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:02.012882 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.012925 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.012931 1225677 cri.go:89] found id: ""
	I1217 01:35:02.012975 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:02.013055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.017605 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.021797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:02.021870 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:02.049550 1225677 cri.go:89] found id: ""
	I1217 01:35:02.049621 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.049638 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:02.049646 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:02.049722 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:02.081301 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.081326 1225677 cri.go:89] found id: ""
	I1217 01:35:02.081335 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:02.081392 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.086118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:02.086210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:02.125352 1225677 cri.go:89] found id: ""
	I1217 01:35:02.125374 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.125383 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:02.125393 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:02.125405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:02.197255 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:02.197318 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:02.197355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:02.226446 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:02.226488 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:02.271257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:02.271293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:02.314955 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:02.314988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.386430 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:02.386468 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.417607 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:02.417682 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.449011 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:02.449041 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:02.551859 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:02.551899 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:02.571928 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:02.571960 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:02.659356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:02.659395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:05.190765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:05.203695 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:05.203771 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:05.238686 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.238707 1225677 cri.go:89] found id: ""
	I1217 01:35:05.238716 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:05.238778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.242613 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:05.242687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:05.272627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.272661 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.272667 1225677 cri.go:89] found id: ""
	I1217 01:35:05.272675 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:05.272757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.277184 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.281337 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:05.281414 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:05.309340 1225677 cri.go:89] found id: ""
	I1217 01:35:05.309361 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.309370 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:05.309377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:05.309437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:05.342268 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.342294 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.342300 1225677 cri.go:89] found id: ""
	I1217 01:35:05.342308 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:05.342394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.346668 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.350724 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:05.350805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:05.378257 1225677 cri.go:89] found id: ""
	I1217 01:35:05.378289 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.378298 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:05.378305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:05.378366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:05.406348 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.406370 1225677 cri.go:89] found id: ""
	I1217 01:35:05.406379 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:05.406455 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.410653 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:05.410724 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:05.441777 1225677 cri.go:89] found id: ""
	I1217 01:35:05.441802 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.441812 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:05.441820 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:05.441832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:05.521081 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:05.521113 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:05.521127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.559491 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:05.559525 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.608690 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:05.608727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.640635 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:05.640666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:05.720771 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:05.720808 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:05.824388 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:05.824427 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.864839 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:05.864871 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.960476 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:05.960520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.992555 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:05.992588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:06.045891 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:06.045925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:08.568611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:08.579598 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:08.579681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:08.607399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.607421 1225677 cri.go:89] found id: ""
	I1217 01:35:08.607430 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:08.607485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.611906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:08.611982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:08.638447 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.638470 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:08.638476 1225677 cri.go:89] found id: ""
	I1217 01:35:08.638484 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:08.638558 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.642337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.646066 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:08.646162 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:08.673000 1225677 cri.go:89] found id: ""
	I1217 01:35:08.673026 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.673036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:08.673042 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:08.673135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:08.701768 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:08.701792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:08.701798 1225677 cri.go:89] found id: ""
	I1217 01:35:08.701806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:08.701892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.705733 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.709545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:08.709620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:08.736283 1225677 cri.go:89] found id: ""
	I1217 01:35:08.736309 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.736319 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:08.736325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:08.736383 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:08.763589 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:08.763610 1225677 cri.go:89] found id: ""
	I1217 01:35:08.763618 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:08.763679 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.768008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:08.768157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:08.794921 1225677 cri.go:89] found id: ""
	I1217 01:35:08.794948 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.794957 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:08.794967 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:08.795003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:08.866335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:08.866356 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:08.866371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.894862 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:08.894894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.945712 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:08.945749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:09.030175 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:09.030213 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:09.057626 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:09.057656 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:09.140070 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:09.140109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:09.249646 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:09.249685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:09.269874 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:09.269906 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:09.317090 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:09.317126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:09.346482 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:09.346513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
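	(The same gather cycle repeats roughly every three seconds, as the timestamps show, until the wait times out. As an illustrative sketch only, not minikube's actual implementation, the polling recorded above is roughly equivalent to the following shell loop built from the commands visible in the log.)

		# Poll until a kube-apiserver process for this profile appears,
		# collecting component state on every failed attempt.
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		  sudo crictl ps -a --quiet --name=kube-apiserver
		  sudo crictl ps -a --quiet --name=etcd
		  sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
		    --kubeconfig=/var/lib/minikube/kubeconfig || true
		  sleep 3
		done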
	I1217 01:35:11.877651 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:11.889575 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:11.889645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:11.917211 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:11.917234 1225677 cri.go:89] found id: ""
	I1217 01:35:11.917243 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:11.917309 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.921144 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:11.921223 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:11.955516 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:11.955536 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:11.955541 1225677 cri.go:89] found id: ""
	I1217 01:35:11.955548 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:11.955604 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.959308 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.962862 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:11.962933 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:11.991261 1225677 cri.go:89] found id: ""
	I1217 01:35:11.991284 1225677 logs.go:282] 0 containers: []
	W1217 01:35:11.991293 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:11.991299 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:11.991366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:12.023452 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.023477 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.023483 1225677 cri.go:89] found id: ""
	I1217 01:35:12.023491 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:12.023581 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.027715 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.031641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:12.031751 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:12.059135 1225677 cri.go:89] found id: ""
	I1217 01:35:12.059211 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.059234 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:12.059255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:12.059343 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:12.092809 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.092830 1225677 cri.go:89] found id: ""
	I1217 01:35:12.092839 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:12.092915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.096814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:12.096963 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:12.132911 1225677 cri.go:89] found id: ""
	I1217 01:35:12.132936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.132946 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:12.132955 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:12.132966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:12.235310 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:12.235346 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:12.255554 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:12.255587 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:12.303522 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:12.303560 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.374998 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:12.375032 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:12.461333 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:12.461371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:12.547450 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:12.547475 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:12.547489 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:12.574864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:12.574892 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:12.619775 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:12.619816 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.649040 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:12.649123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.677296 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:12.677326 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.212228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:15.225138 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:15.225215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:15.259192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.259218 1225677 cri.go:89] found id: ""
	I1217 01:35:15.259228 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:15.259287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.263205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:15.263279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:15.290493 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.290516 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.290521 1225677 cri.go:89] found id: ""
	I1217 01:35:15.290529 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:15.290588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.294490 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.298107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:15.298208 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:15.325021 1225677 cri.go:89] found id: ""
	I1217 01:35:15.325047 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.325057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:15.325063 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:15.325125 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:15.353712 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.353744 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.353750 1225677 cri.go:89] found id: ""
	I1217 01:35:15.353758 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:15.353828 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.357883 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.361729 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:15.361817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:15.389342 1225677 cri.go:89] found id: ""
	I1217 01:35:15.389370 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.389379 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:15.389386 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:15.389449 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:15.418437 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.418470 1225677 cri.go:89] found id: ""
	I1217 01:35:15.418479 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:15.418553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.422466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:15.422548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:15.449297 1225677 cri.go:89] found id: ""
	I1217 01:35:15.449333 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.449343 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:15.449370 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:15.449394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:15.468355 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:15.468385 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.494969 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:15.495005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.543170 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:15.543209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:15.616803 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:15.616829 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:15.616845 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.659996 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:15.660031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.730995 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:15.731034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.758963 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:15.758994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.785562 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:15.785633 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:15.872457 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:15.872494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.904808 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:15.904838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:18.506161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:18.518520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:18.518589 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:18.550949 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.550972 1225677 cri.go:89] found id: ""
	I1217 01:35:18.550982 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:18.551041 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.554800 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:18.554880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:18.582497 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:18.582522 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.582527 1225677 cri.go:89] found id: ""
	I1217 01:35:18.582535 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:18.582594 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.586831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.590486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:18.590560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:18.617401 1225677 cri.go:89] found id: ""
	I1217 01:35:18.617426 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.617436 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:18.617443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:18.617504 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:18.648400 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:18.648458 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.648464 1225677 cri.go:89] found id: ""
	I1217 01:35:18.648472 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:18.648530 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.652380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.655820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:18.655916 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:18.689519 1225677 cri.go:89] found id: ""
	I1217 01:35:18.689544 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.689553 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:18.689560 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:18.689621 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:18.718284 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:18.718306 1225677 cri.go:89] found id: ""
	I1217 01:35:18.718313 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:18.718368 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.722268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:18.722372 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:18.753514 1225677 cri.go:89] found id: ""
	I1217 01:35:18.753542 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.753558 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:18.753567 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:18.753611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:18.771813 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:18.771842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:18.845441 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:18.845463 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:18.845477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.872553 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:18.872582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.922099 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:18.922176 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.950258 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:18.950285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:18.990211 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:18.990241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:19.031127 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:19.031164 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:19.107071 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:19.107109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:19.138299 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:19.138327 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:19.222624 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:19.222660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:21.834640 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:21.845711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:21.845784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:21.895249 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:21.895280 1225677 cri.go:89] found id: ""
	I1217 01:35:21.895292 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:21.895371 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.902322 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:21.902404 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:21.943815 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:21.943857 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:21.943863 1225677 cri.go:89] found id: ""
	I1217 01:35:21.943877 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:21.943963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.949206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.954547 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:21.954640 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:21.988594 1225677 cri.go:89] found id: ""
	I1217 01:35:21.988620 1225677 logs.go:282] 0 containers: []
	W1217 01:35:21.988630 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:21.988636 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:21.988718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:22.024625 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.024646 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.024651 1225677 cri.go:89] found id: ""
	I1217 01:35:22.024660 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:22.024760 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.029143 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.033935 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:22.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:22.067922 1225677 cri.go:89] found id: ""
	I1217 01:35:22.067946 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.067955 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:22.067961 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:22.068020 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:22.097619 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.097641 1225677 cri.go:89] found id: ""
	I1217 01:35:22.097649 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:22.097706 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.101692 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:22.101766 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:22.136868 1225677 cri.go:89] found id: ""
	I1217 01:35:22.136891 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.136900 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:22.136911 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:22.136923 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:22.164209 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:22.164236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:22.208399 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:22.208512 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:22.256618 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:22.256650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.287201 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:22.287237 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.314443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:22.314472 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:22.346752 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:22.346780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:22.445530 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:22.445567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:22.464378 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:22.464409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.554715 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:22.554749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:22.659061 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:22.659103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:22.731143 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.231455 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:25.242812 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:25.242949 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:25.280443 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.280470 1225677 cri.go:89] found id: ""
	I1217 01:35:25.280478 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:25.280536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.284885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:25.285008 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:25.313823 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.313846 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.313852 1225677 cri.go:89] found id: ""
	I1217 01:35:25.313859 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:25.313939 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.317952 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.321539 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:25.321620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:25.354565 1225677 cri.go:89] found id: ""
	I1217 01:35:25.354632 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.354656 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:25.354681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:25.354777 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:25.386743 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.386774 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.386779 1225677 cri.go:89] found id: ""
	I1217 01:35:25.386787 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:25.386857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.390671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.394226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:25.394339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:25.421123 1225677 cri.go:89] found id: ""
	I1217 01:35:25.421212 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.421228 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:25.421236 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:25.421310 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:25.448879 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.448904 1225677 cri.go:89] found id: ""
	I1217 01:35:25.448913 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:25.448971 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.452707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:25.452782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:25.479351 1225677 cri.go:89] found id: ""
	I1217 01:35:25.479379 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.479389 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:25.479399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:25.479410 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:25.577317 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:25.577354 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:25.600156 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:25.600203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:25.679524 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.679600 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:25.679621 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.706792 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:25.706824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.764895 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:25.764934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.796158 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:25.796188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.823684 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:25.823721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:25.857273 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:25.857303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.915963 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:25.916003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.992485 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:25.992520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:28.577965 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:28.588733 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:28.588802 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:28.621192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.621211 1225677 cri.go:89] found id: ""
	I1217 01:35:28.621220 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:28.621279 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.625055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:28.625124 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:28.651718 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:28.651738 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.651742 1225677 cri.go:89] found id: ""
	I1217 01:35:28.651749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:28.651807 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.656353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.660550 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:28.660620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:28.688556 1225677 cri.go:89] found id: ""
	I1217 01:35:28.688580 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.688589 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:28.688596 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:28.688654 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:28.716478 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:28.716503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:28.716508 1225677 cri.go:89] found id: ""
	I1217 01:35:28.716516 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:28.716603 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.720442 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.723785 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:28.723862 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:28.750780 1225677 cri.go:89] found id: ""
	I1217 01:35:28.750807 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.750817 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:28.750823 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:28.750882 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:28.777746 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:28.777772 1225677 cri.go:89] found id: ""
	I1217 01:35:28.777781 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:28.777836 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.781586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:28.781707 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:28.812032 1225677 cri.go:89] found id: ""
	I1217 01:35:28.812062 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.812072 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:28.812081 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:28.812115 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:28.910028 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:28.910067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.938533 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:28.938565 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.982530 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:28.982566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:29.059912 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:29.059948 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:29.087417 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:29.087449 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:29.141591 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:29.141622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:29.162662 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:29.162694 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:29.245511 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:29.245537 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:29.245553 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:29.286747 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:29.286784 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:29.317045 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:29.317075 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:31.896935 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:31.908531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:31.908605 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:31.951663 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:31.951684 1225677 cri.go:89] found id: ""
	I1217 01:35:31.951692 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:31.951746 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.956325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:31.956501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:31.990512 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:31.990578 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:31.990598 1225677 cri.go:89] found id: ""
	I1217 01:35:31.990625 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:31.990708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.994957 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.001450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:32.001597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:32.033107 1225677 cri.go:89] found id: ""
	I1217 01:35:32.033136 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.033146 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:32.033153 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:32.033245 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:32.061118 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.061140 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.061145 1225677 cri.go:89] found id: ""
	I1217 01:35:32.061153 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:32.061208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.065195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.068963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:32.069066 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:32.099914 1225677 cri.go:89] found id: ""
	I1217 01:35:32.099941 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.099951 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:32.099957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:32.100018 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:32.134003 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.134028 1225677 cri.go:89] found id: ""
	I1217 01:35:32.134044 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:32.134101 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.138837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:32.138909 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:32.178095 1225677 cri.go:89] found id: ""
	I1217 01:35:32.178168 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.178193 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:32.178210 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:32.178223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:32.219018 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:32.219049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:32.328076 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:32.328182 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:32.347854 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:32.347887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:32.389069 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:32.389143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.464016 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:32.464052 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.492348 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:32.492466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.519965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:32.520035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:32.589420 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:32.589485 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:32.589506 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:32.615780 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:32.615814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:32.668491 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:32.668527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.253556 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:35.266266 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:35.266344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:35.303632 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.303658 1225677 cri.go:89] found id: ""
	I1217 01:35:35.303667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:35.303726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.307439 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:35.307511 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:35.336107 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.336131 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.336136 1225677 cri.go:89] found id: ""
	I1217 01:35:35.336143 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:35.336196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.340106 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.343587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:35.343667 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:35.374453 1225677 cri.go:89] found id: ""
	I1217 01:35:35.374483 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.374492 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:35.374498 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:35.374560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:35.401769 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.401792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.401798 1225677 cri.go:89] found id: ""
	I1217 01:35:35.401806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:35.401860 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.405507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.409182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:35.409254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:35.437191 1225677 cri.go:89] found id: ""
	I1217 01:35:35.437229 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.437280 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:35.437303 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:35.437454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:35.464026 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.464048 1225677 cri.go:89] found id: ""
	I1217 01:35:35.464056 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:35.464113 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.467752 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:35.467854 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:35.495119 1225677 cri.go:89] found id: ""
	I1217 01:35:35.495143 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.495152 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:35.495161 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:35.495173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.538118 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:35.538157 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.612361 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:35.612398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.642424 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:35.642454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.671140 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:35.671168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.753840 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:35.753879 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:35.791176 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:35.791207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:35.861567 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:35.861588 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:35.861604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.887544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:35.887573 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.930868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:35.930901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:36.035955 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:36.035997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.556940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:38.568341 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:38.568410 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:38.602139 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:38.602163 1225677 cri.go:89] found id: ""
	I1217 01:35:38.602172 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:38.602234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.606168 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:38.606244 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:38.636762 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:38.636782 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:38.636787 1225677 cri.go:89] found id: ""
	I1217 01:35:38.636795 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:38.636849 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.640703 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.644870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:38.644980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:38.672028 1225677 cri.go:89] found id: ""
	I1217 01:35:38.672105 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.672130 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:38.672152 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:38.672252 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:38.702063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:38.702088 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:38.702096 1225677 cri.go:89] found id: ""
	I1217 01:35:38.702104 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:38.702189 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.706075 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.710843 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:38.710923 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:38.739176 1225677 cri.go:89] found id: ""
	I1217 01:35:38.739204 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.739214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:38.739221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:38.739281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:38.765721 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:38.765749 1225677 cri.go:89] found id: ""
	I1217 01:35:38.765759 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:38.765835 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.769950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:38.770026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:38.797985 1225677 cri.go:89] found id: ""
	I1217 01:35:38.798013 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.798023 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:38.798033 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:38.798065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:38.898407 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:38.898448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.917886 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:38.917920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:38.999335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:38.999368 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:38.999384 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:39.041692 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:39.041729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:39.089675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:39.089712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:39.172952 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:39.172988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:39.211704 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:39.211736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:39.241891 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:39.241920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:39.276958 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:39.276988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:39.364067 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:39.364119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:41.897002 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:41.908024 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:41.908100 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:41.937482 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:41.937556 1225677 cri.go:89] found id: ""
	I1217 01:35:41.937569 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:41.937630 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.941542 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:41.941611 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:41.987116 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:41.987139 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:41.987145 1225677 cri.go:89] found id: ""
	I1217 01:35:41.987153 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:41.987206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.991091 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.994831 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:41.994905 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:42.033990 1225677 cri.go:89] found id: ""
	I1217 01:35:42.034016 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.034025 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:42.034031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:42.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:42.065878 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:42.065959 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.065980 1225677 cri.go:89] found id: ""
	I1217 01:35:42.066005 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:42.066122 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.071367 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.076378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:42.076531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:42.123414 1225677 cri.go:89] found id: ""
	I1217 01:35:42.123521 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.123583 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:42.123610 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:42.123706 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:42.163210 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.163302 1225677 cri.go:89] found id: ""
	I1217 01:35:42.163328 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:42.163431 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.168650 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:42.168758 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:42.211741 1225677 cri.go:89] found id: ""
	I1217 01:35:42.211767 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.211777 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:42.211787 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:42.211800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:42.252091 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:42.252126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:42.356409 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:42.356465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:42.377129 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:42.377163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:42.449855 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:42.449879 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:42.449893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:42.476498 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:42.476530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:42.518303 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:42.518337 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.548819 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:42.548852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.578811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:42.578840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:42.658356 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:42.658395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:42.700126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:42.700173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.276979 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:45.301570 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:45.301737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:45.339316 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:45.339342 1225677 cri.go:89] found id: ""
	I1217 01:35:45.339351 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:45.339441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.343543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:45.343652 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:45.374479 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.374552 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.374574 1225677 cri.go:89] found id: ""
	I1217 01:35:45.374600 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:45.374672 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.378901 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.382870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:45.382942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:45.413785 1225677 cri.go:89] found id: ""
	I1217 01:35:45.413816 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.413825 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:45.413832 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:45.413894 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:45.446395 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.446417 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.446423 1225677 cri.go:89] found id: ""
	I1217 01:35:45.446431 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:45.446508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.450414 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.454372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:45.454448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:45.483846 1225677 cri.go:89] found id: ""
	I1217 01:35:45.483918 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.483942 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:45.483963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:45.484039 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:45.515890 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.515962 1225677 cri.go:89] found id: ""
	I1217 01:35:45.515986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:45.516060 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.519980 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:45.520107 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:45.548900 1225677 cri.go:89] found id: ""
	I1217 01:35:45.548984 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.549001 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:45.549011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:45.549023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.594641 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:45.594680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.623072 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:45.623171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:45.701558 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:45.701599 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:45.775358 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:45.775423 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:45.775443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.822675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:45.822712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.904212 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:45.904249 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.934553 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:45.934581 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:45.966200 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:45.966231 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:46.073612 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:46.073651 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:46.092826 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:46.092860 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.626362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:48.637081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:48.637157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:48.663951 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.664018 1225677 cri.go:89] found id: ""
	I1217 01:35:48.664045 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:48.664137 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.667889 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:48.668007 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:48.695424 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:48.695498 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:48.695518 1225677 cri.go:89] found id: ""
	I1217 01:35:48.695570 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:48.695667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.699980 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.703779 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:48.703875 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:48.731347 1225677 cri.go:89] found id: ""
	I1217 01:35:48.731372 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.731381 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:48.731388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:48.731448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:48.761776 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:48.761802 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:48.761808 1225677 cri.go:89] found id: ""
	I1217 01:35:48.761816 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:48.761875 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.766072 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.769796 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:48.769871 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:48.799377 1225677 cri.go:89] found id: ""
	I1217 01:35:48.799404 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.799412 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:48.799418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:48.799477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:48.828149 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:48.828173 1225677 cri.go:89] found id: ""
	I1217 01:35:48.828192 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:48.828254 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.832599 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:48.832717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:48.858554 1225677 cri.go:89] found id: ""
	I1217 01:35:48.858587 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.858597 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:48.858626 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:48.858643 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:48.894472 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:48.894502 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:48.969952 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:48.969978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:48.969994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:49.014023 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:49.014058 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:49.092630 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:49.092671 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:49.197053 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:49.197088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:49.225929 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:49.225963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:49.253145 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:49.253174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:49.301391 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:49.301428 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:49.337786 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:49.337819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:49.367000 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:49.367029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:51.942903 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:51.957586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:51.957662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:52.007996 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.008017 1225677 cri.go:89] found id: ""
	I1217 01:35:52.008026 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:52.008082 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.015080 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:52.015148 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:52.052213 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.052249 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.052255 1225677 cri.go:89] found id: ""
	I1217 01:35:52.052262 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:52.052318 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.056182 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.059959 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:52.060033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:52.090239 1225677 cri.go:89] found id: ""
	I1217 01:35:52.090264 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.090274 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:52.090281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:52.090341 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:52.118854 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:52.118874 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.118879 1225677 cri.go:89] found id: ""
	I1217 01:35:52.118886 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:52.118946 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.125093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.128837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:52.128931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:52.157907 1225677 cri.go:89] found id: ""
	I1217 01:35:52.157936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.157945 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:52.157957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:52.158017 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:52.191428 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.191451 1225677 cri.go:89] found id: ""
	I1217 01:35:52.191459 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:52.191543 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.195375 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:52.195456 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:52.224407 1225677 cri.go:89] found id: ""
	I1217 01:35:52.224468 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.224477 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:52.224486 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:52.224498 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.252950 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:52.252981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.279228 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:52.279258 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:52.298974 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:52.299007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:52.370510 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:52.370544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:52.370588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.418893 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:52.418934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:52.499956 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:52.499992 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:52.542158 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:52.542187 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:52.643325 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:52.643367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.671238 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:52.671267 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.712214 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:52.712252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.294635 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:55.305795 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:55.305897 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:55.341120 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.341143 1225677 cri.go:89] found id: ""
	I1217 01:35:55.341152 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:55.341208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.345154 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:55.345236 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:55.376865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.376937 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.376959 1225677 cri.go:89] found id: ""
	I1217 01:35:55.376982 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:55.377065 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.381380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.385355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:55.385472 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:55.412679 1225677 cri.go:89] found id: ""
	I1217 01:35:55.412701 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.412710 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:55.412716 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:55.412773 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:55.439554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.439573 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.439578 1225677 cri.go:89] found id: ""
	I1217 01:35:55.439585 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:55.439639 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.443337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.446737 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:55.446804 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:55.478015 1225677 cri.go:89] found id: ""
	I1217 01:35:55.478039 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.478052 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:55.478065 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:55.478136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:55.503877 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:55.503940 1225677 cri.go:89] found id: ""
	I1217 01:35:55.503964 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:55.504038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.507809 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:55.507880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:55.539899 1225677 cri.go:89] found id: ""
	I1217 01:35:55.539926 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.539935 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:55.539951 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:55.539963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:55.642073 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:55.642111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:55.662102 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:55.662143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.689162 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:55.689192 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.728771 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:55.728804 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.755851 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:55.755878 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:55.839759 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:55.839805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:55.910162 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:55.910183 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:55.910197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.962626 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:55.962664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:56.057075 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:56.057126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:56.095037 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:56.095069 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:58.632280 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:58.643092 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:58.643199 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:58.670245 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:58.670268 1225677 cri.go:89] found id: ""
	I1217 01:35:58.670277 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:58.670332 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.673988 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:58.674059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:58.706113 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:58.706135 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:58.706140 1225677 cri.go:89] found id: ""
	I1217 01:35:58.706148 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:58.706234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.710732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.714631 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:58.714747 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:58.742956 1225677 cri.go:89] found id: ""
	I1217 01:35:58.742982 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.742991 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:58.742997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:58.743058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:58.774022 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:58.774044 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:58.774050 1225677 cri.go:89] found id: ""
	I1217 01:35:58.774058 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:58.774112 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.778073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.781607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:58.781686 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:58.808679 1225677 cri.go:89] found id: ""
	I1217 01:35:58.808703 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.808719 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:58.808725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:58.808785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:58.835922 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:58.835942 1225677 cri.go:89] found id: ""
	I1217 01:35:58.835951 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:58.836007 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.839615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:58.839689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:58.866788 1225677 cri.go:89] found id: ""
	I1217 01:35:58.866813 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.866823 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:58.866833 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:58.866866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:58.968702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:58.968738 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:58.989939 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:58.989967 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:59.058020 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:59.058046 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:59.058059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:59.088364 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:59.088394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:59.141100 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:59.141135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:59.232851 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:59.232891 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:59.262771 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:59.262800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:59.290187 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:59.290224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:59.339890 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:59.339924 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:59.422198 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:59.422236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:01.956538 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:01.967590 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:01.967660 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:02.007538 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.007575 1225677 cri.go:89] found id: ""
	I1217 01:36:02.007584 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:02.007670 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.012001 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:02.012136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:02.046710 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.046735 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.046741 1225677 cri.go:89] found id: ""
	I1217 01:36:02.046749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:02.046804 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.050667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.054450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:02.054546 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:02.081851 1225677 cri.go:89] found id: ""
	I1217 01:36:02.081880 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.081890 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:02.081897 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:02.081980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:02.112077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.112101 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.112106 1225677 cri.go:89] found id: ""
	I1217 01:36:02.112114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:02.112169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.116263 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.121396 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:02.121492 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:02.152376 1225677 cri.go:89] found id: ""
	I1217 01:36:02.152404 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.152497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:02.152523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:02.152642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:02.187133 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.187159 1225677 cri.go:89] found id: ""
	I1217 01:36:02.187168 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:02.187247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.191078 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:02.191173 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:02.220566 1225677 cri.go:89] found id: ""
	I1217 01:36:02.220593 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.220602 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:02.220611 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:02.220659 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.253992 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:02.254021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.304043 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:02.304077 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.350981 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:02.351020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.431358 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:02.431393 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.458269 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:02.458298 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:02.561780 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:02.561820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:02.582487 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:02.582522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:02.663558 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:02.663583 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:02.663596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.700536 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:02.700568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:02.775505 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:02.775547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.310734 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:05.322909 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:05.322985 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:05.350653 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.350738 1225677 cri.go:89] found id: ""
	I1217 01:36:05.350762 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:05.350819 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.355346 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:05.355461 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:05.385411 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:05.385439 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.385445 1225677 cri.go:89] found id: ""
	I1217 01:36:05.385453 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:05.385511 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.389761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.393387 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:05.393463 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:05.420412 1225677 cri.go:89] found id: ""
	I1217 01:36:05.420495 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.420505 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:05.420511 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:05.420569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:05.452034 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:05.452060 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.452066 1225677 cri.go:89] found id: ""
	I1217 01:36:05.452075 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:05.452131 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.456205 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.460128 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:05.460221 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:05.486956 1225677 cri.go:89] found id: ""
	I1217 01:36:05.486986 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.486995 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:05.487002 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:05.487063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:05.518138 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.518160 1225677 cri.go:89] found id: ""
	I1217 01:36:05.518169 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:05.518227 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.522038 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:05.522112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:05.552883 1225677 cri.go:89] found id: ""
	I1217 01:36:05.552951 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.552969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:05.552980 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:05.552994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.580975 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:05.581006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:05.677135 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:05.677178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:05.697133 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:05.697163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.725150 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:05.725181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.768358 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:05.768396 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.794846 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:05.794876 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:05.871841 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:05.871921 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.905951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:05.905982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:05.976460 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:05.976482 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:05.976495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:06.030179 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:06.030260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.614353 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:08.625446 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:08.625527 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:08.652272 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.652300 1225677 cri.go:89] found id: ""
	I1217 01:36:08.652309 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:08.652372 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.656164 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:08.656237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:08.682167 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.682186 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:08.682190 1225677 cri.go:89] found id: ""
	I1217 01:36:08.682198 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:08.682258 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.686632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.690338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:08.690409 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:08.717708 1225677 cri.go:89] found id: ""
	I1217 01:36:08.717732 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.717741 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:08.717748 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:08.717805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:08.754193 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.754217 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:08.754222 1225677 cri.go:89] found id: ""
	I1217 01:36:08.754229 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:08.754285 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.758295 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.761917 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:08.762011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:08.793723 1225677 cri.go:89] found id: ""
	I1217 01:36:08.793750 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.793761 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:08.793774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:08.793833 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:08.820995 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:08.821018 1225677 cri.go:89] found id: ""
	I1217 01:36:08.821027 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:08.821109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.824969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:08.825043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:08.850861 1225677 cri.go:89] found id: ""
	I1217 01:36:08.850896 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.850906 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:08.850917 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:08.850929 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:08.927540 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:08.927562 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:08.927576 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.953082 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:08.953110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.994744 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:08.994781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:09.027277 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:09.027305 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:09.056339 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:09.056367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:09.129785 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:09.129820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:09.161526 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:09.161607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:09.261869 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:09.261908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:09.282618 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:09.282652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:09.328912 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:09.328949 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:11.909228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:11.920145 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:11.920215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:11.953558 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:11.953581 1225677 cri.go:89] found id: ""
	I1217 01:36:11.953589 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:11.953643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.957221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:11.957293 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:11.984240 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:11.984263 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:11.984268 1225677 cri.go:89] found id: ""
	I1217 01:36:11.984276 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:11.984336 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.987996 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.991849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:11.991924 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:12.022066 1225677 cri.go:89] found id: ""
	I1217 01:36:12.022096 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.022106 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:12.022113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:12.022174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:12.058540 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.058563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.058569 1225677 cri.go:89] found id: ""
	I1217 01:36:12.058577 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:12.058629 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.063379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.067419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:12.067548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:12.095872 1225677 cri.go:89] found id: ""
	I1217 01:36:12.095900 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.095922 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:12.095929 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:12.095998 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:12.134836 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.134910 1225677 cri.go:89] found id: ""
	I1217 01:36:12.134933 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:12.135022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.139454 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:12.139524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:12.178455 1225677 cri.go:89] found id: ""
	I1217 01:36:12.178481 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.178491 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:12.178500 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:12.178538 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.215176 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:12.215204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:12.304978 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:12.305015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:12.342716 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:12.342745 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:12.444908 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:12.444945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:12.463288 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:12.463316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:12.536568 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:12.536589 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:12.536603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:12.576446 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:12.576479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.652969 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:12.653004 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.684862 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:12.684893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:12.713785 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:12.713815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:15.267669 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:15.282407 1225677 out.go:203] 
	W1217 01:36:15.285472 1225677 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 01:36:15.285518 1225677 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 01:36:15.285531 1225677 out.go:285] * Related issues:
	W1217 01:36:15.285545 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 01:36:15.285561 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 01:36:15.288521 1225677 out.go:203] 
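	
	The K8S_APISERVER_MISSING exit above means the 6m0s wait for a kube-apiserver process on ha-202151 timed out. The same checks can be repeated by hand on the node (a minimal sketch, assuming shell access to the ha-202151 node and that curl is present in the node image; the first two commands mirror the ones logged above):
	
	  # same process check logs.go runs above
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # is the apiserver container up according to CRI-O?
	  sudo crictl ps -a --name=kube-apiserver
	  # does the local endpoint answer? (connection refused here matches the kubectl errors above)
	  curl -k https://localhost:8443/healthz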
	
	
	==> CRI-O <==
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.00263192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.018401147Z" level=info msg="Created container 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62: kube-system/storage-provisioner/storage-provisioner" id=1949dc31-1f1c-4b50-a2e1-37b3fdbf1dae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.019096564Z" level=info msg="Starting container: 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62" id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.02762405Z" level=info msg="Started container" PID=1465 containerID=69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62 description=kube-system/storage-provisioner/storage-provisioner id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer sandboxID=201ec2eb9e7bac96947c26eb05eaeb60a6c9cb562fc7abd5b112bcffc3034df6
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.942366958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946089951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.9461257Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946150479Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949691184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.94972877Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949750136Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953024484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953060389Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953083707Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956843738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956882473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.984628463Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=d06134a9-f254-4735-8afd-66ee773b0add name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.986619446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=64030ed7-d453-4dae-a62d-31943ce0a699 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988074458Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988182542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.010661643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.011529823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.034308469Z" level=info msg="Created container bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.036802709Z" level=info msg="Starting container: bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee" id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.042056225Z" level=info msg="Started container" PID=1514 containerID=bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee description=kube-system/kube-controller-manager-ha-202151/kube-controller-manager id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5021c181f938b38114a133bf254586f8ff5e1e22eea40c87bb44019760307250
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bbbccca1f1945       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   5 minutes ago       Running             kube-controller-manager   7                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	69c29e5195bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       7                   201ec2eb9e7ba       storage-provisioner                 kube-system
	3345ee69cef2f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   6                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	e2674511b7c44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       6                   201ec2eb9e7ba       storage-provisioner                 kube-system
	5b41f976d94aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   7991c76c60a45       coredns-66bc5c9577-km6lq            kube-system
	f78b81e996c76       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   b40c6af808cd2       busybox-7b57f96db7-hw4rm            default
	4f3ffacfcf52c       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   7 minutes ago       Running             kube-proxy                2                   db6cac339dafd       kube-proxy-5gdc5                    kube-system
	cc242e356e74c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   416ecd7d82605       coredns-66bc5c9577-4s6qf            kube-system
	421b902e0a04a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   0059b57d997fb       kindnet-7b5wx                       kube-system
	9deff052e5328       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   cdd6d86a58561       etcd-ha-202151                      kube-system
	b08781420f13d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            3                   55c73e3aeca0b       kube-apiserver-ha-202151            kube-system
	d2d094f7ce12d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   9fa81adaf2298       kube-scheduler-ha-202151            kube-system
	f70584959dd02       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  2                   5cb308ab59abd       kube-vip-ha-202151                  kube-system
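	
	Note that CRI-O lists the kube-apiserver container (b08781420f13d…, attempt 3) as Running even though minikube's process check and the kubectl probes above failed, so the apiserver itself, or its connection to etcd, is worth inspecting first. Its recent output can be pulled with the same crictl invocation used elsewhere in this log (a sketch, reusing the container ID from the table above):
	
	  # tail the running apiserver container's logs
	  sudo /usr/local/bin/crictl logs --tail 400 b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43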
	
	
	==> coredns [5b41f976d94aab2a66d015407415d4106cf8778628764f4904a5062779241af6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cc242e356e74c1c82ae80013999351dff6fb19a83d4a91a90cd125e034418779] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
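	
	Both CoreDNS replicas time out against 10.96.0.1:443, the ClusterIP of the default kubernetes Service that fronts the apiservers, which is consistent with the control-plane failure above rather than a DNS-specific fault. A quick probe of that Service IP from the node (illustrative only; assumes curl is available and that kube-proxy's rules apply to host traffic, which they normally do on a node):
	
	  # anonymous healthz request against the in-cluster apiserver Service
	  curl -k https://10.96.0.1:443/healthz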
	
	
	==> describe nodes <==
	Name:               ha-202151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:36:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:33:48 +0000   Wed, 17 Dec 2025 01:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-202151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                7edb1e1f-1b17-415f-9229-48ba3527eefe
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hw4rm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-4s6qf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 coredns-66bc5c9577-km6lq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 etcd-ha-202151                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         23m
	  kube-system                 kindnet-7b5wx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-ha-202151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-202151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-5gdc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-202151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-202151                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m41s                  kube-proxy       
	  Normal   Starting                 9m45s                  kube-proxy       
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    23m (x8 over 23m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m                    kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 23m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 23m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23m                    kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m                    kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           23m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           22m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeReady                22m                    kubelet          Node ha-202151 status is now: NodeReady
	  Normal   RegisteredNode           21m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m44s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           9m43s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           9m9s                   node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   Starting                 7m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     7m53s (x8 over 7m54s)  kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m53s (x8 over 7m54s)  kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m53s (x8 over 7m54s)  kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m11s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	
	
	Name:               ha-202151-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:13:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-202151-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                04eb29d0-5ea5-46d1-ae46-afe3ee374602
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rz794                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 etcd-ha-202151-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-nt6qx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-ha-202151-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-202151-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-hp525                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-202151-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-202151-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m30s              kube-proxy       
	  Normal   Starting                 22m                kube-proxy       
	  Normal   RegisteredNode           22m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           22m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             18m                node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m44s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           9m43s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           9m9s               node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           5m11s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             4m21s              node-controller  Node ha-202151-m02 status is now: NodeNotReady
	
	
	Name:               ha-202151-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:16:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-202151-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                84c842f9-c3a2-4245-b176-e32c4cbe3e2c
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2d7p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-cntp7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-proxy-kqgdw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m56s                  kube-proxy       
	  Normal   Starting                 20m                    kube-proxy       
	  Normal   NodeHasSufficientPID     20m (x3 over 20m)      kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     20m                    cidrAllocator    Node ha-202151-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           20m                    node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeHasSufficientMemory  20m (x3 over 20m)      kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x3 over 20m)      kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                    node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeReady                19m                    kubelet          Node ha-202151-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m44s                  node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           9m43s                  node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   Starting                 9m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m15s (x8 over 9m18s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m15s (x8 over 9m18s)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m15s (x8 over 9m18s)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m9s                   node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           5m11s                  node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeNotReady             4m21s                  node-controller  Node ha-202151-m04 status is now: NodeNotReady
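	
	Both secondary nodes (ha-202151-m02 and ha-202151-m04) carry node.kubernetes.io/unreachable taints and stopped posting status around 01:32:03, so only ha-202151 was still reporting when this snapshot was taken. Once an apiserver answers again, the same view can be obtained in one line with the kubectl binary the test uses (a sketch, reusing the path and kubeconfig from the describe-nodes command above):
	
	  # summarize node readiness, versions, and addresses
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl get nodes -o wide --kubeconfig=/var/lib/minikube/kubeconfig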
	
	
	==> dmesg <==
	[Dec16 23:59] overlayfs: idmapped layers are currently not supported
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	[Dec17 01:12] overlayfs: idmapped layers are currently not supported
	[Dec17 01:13] overlayfs: idmapped layers are currently not supported
	[Dec17 01:14] overlayfs: idmapped layers are currently not supported
	[Dec17 01:16] overlayfs: idmapped layers are currently not supported
	[Dec17 01:17] overlayfs: idmapped layers are currently not supported
	[Dec17 01:19] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c] <==
	{"level":"info","ts":"2025-12-17T01:30:14.263103Z","caller":"traceutil/trace.go:172","msg":"trace[949367018] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-4s6qf; range_end:; response_count:1; response_revision:3412; }","duration":"104.813001ms","start":"2025-12-17T01:30:14.158280Z","end":"2025-12-17T01:30:14.263093Z","steps":["trace[949367018] 'agreement among raft nodes before linearized reading'  (duration: 104.719317ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263447Z","caller":"traceutil/trace.go:172","msg":"trace[1053355413] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:3412; }","duration":"105.36858ms","start":"2025-12-17T01:30:14.158070Z","end":"2025-12-17T01:30:14.263439Z","steps":["trace[1053355413] 'agreement among raft nodes before linearized reading'  (duration: 105.337525ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263678Z","caller":"traceutil/trace.go:172","msg":"trace[1642161222] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:3412; }","duration":"105.615171ms","start":"2025-12-17T01:30:14.158056Z","end":"2025-12-17T01:30:14.263671Z","steps":["trace[1642161222] 'agreement among raft nodes before linearized reading'  (duration: 105.574171ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.263977Z","caller":"traceutil/trace.go:172","msg":"trace[1375962484] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:3412; }","duration":"105.938134ms","start":"2025-12-17T01:30:14.158032Z","end":"2025-12-17T01:30:14.263970Z","steps":["trace[1375962484] 'agreement among raft nodes before linearized reading'  (duration: 105.887731ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264496Z","caller":"traceutil/trace.go:172","msg":"trace[240166330] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:3412; }","duration":"106.723615ms","start":"2025-12-17T01:30:14.157763Z","end":"2025-12-17T01:30:14.264487Z","steps":["trace[240166330] 'agreement among raft nodes before linearized reading'  (duration: 106.683485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264648Z","caller":"traceutil/trace.go:172","msg":"trace[801862479] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:3412; }","duration":"106.901646ms","start":"2025-12-17T01:30:14.157741Z","end":"2025-12-17T01:30:14.264642Z","steps":["trace[801862479] 'agreement among raft nodes before linearized reading'  (duration: 106.880281ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264713Z","caller":"traceutil/trace.go:172","msg":"trace[1298748005] range","detail":"{range_begin:/registry/resourceslices/; range_end:/registry/resourceslices0; response_count:0; response_revision:3412; }","duration":"106.990711ms","start":"2025-12-17T01:30:14.157718Z","end":"2025-12-17T01:30:14.264709Z","steps":["trace[1298748005] 'agreement among raft nodes before linearized reading'  (duration: 106.971667ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.264867Z","caller":"traceutil/trace.go:172","msg":"trace[1872430785] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:3412; }","duration":"107.168462ms","start":"2025-12-17T01:30:14.157694Z","end":"2025-12-17T01:30:14.264862Z","steps":["trace[1872430785] 'agreement among raft nodes before linearized reading'  (duration: 107.100657ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265008Z","caller":"traceutil/trace.go:172","msg":"trace[546890442] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:2; response_revision:3412; }","duration":"107.336868ms","start":"2025-12-17T01:30:14.157666Z","end":"2025-12-17T01:30:14.265003Z","steps":["trace[546890442] 'agreement among raft nodes before linearized reading'  (duration: 107.286695ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265113Z","caller":"traceutil/trace.go:172","msg":"trace[160706393] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:3412; }","duration":"107.464209ms","start":"2025-12-17T01:30:14.157644Z","end":"2025-12-17T01:30:14.265109Z","steps":["trace[160706393] 'agreement among raft nodes before linearized reading'  (duration: 107.444935ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265358Z","caller":"traceutil/trace.go:172","msg":"trace[60800954] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:59; response_revision:3412; }","duration":"107.734782ms","start":"2025-12-17T01:30:14.157618Z","end":"2025-12-17T01:30:14.265353Z","steps":["trace[60800954] 'agreement among raft nodes before linearized reading'  (duration: 107.570996ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265491Z","caller":"traceutil/trace.go:172","msg":"trace[531992615] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:3412; }","duration":"107.945895ms","start":"2025-12-17T01:30:14.157540Z","end":"2025-12-17T01:30:14.265486Z","steps":["trace[531992615] 'agreement among raft nodes before linearized reading'  (duration: 107.925047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265658Z","caller":"traceutil/trace.go:172","msg":"trace[1390021984] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:3412; }","duration":"117.155252ms","start":"2025-12-17T01:30:14.148497Z","end":"2025-12-17T01:30:14.265652Z","steps":["trace[1390021984] 'agreement among raft nodes before linearized reading'  (duration: 117.062208ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265769Z","caller":"traceutil/trace.go:172","msg":"trace[679095335] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:3412; }","duration":"117.281746ms","start":"2025-12-17T01:30:14.148481Z","end":"2025-12-17T01:30:14.265763Z","steps":["trace[679095335] 'agreement among raft nodes before linearized reading'  (duration: 117.263704ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265857Z","caller":"traceutil/trace.go:172","msg":"trace[372052167] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:3412; }","duration":"117.442719ms","start":"2025-12-17T01:30:14.148409Z","end":"2025-12-17T01:30:14.265852Z","steps":["trace[372052167] 'agreement among raft nodes before linearized reading'  (duration: 117.422576ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.265911Z","caller":"traceutil/trace.go:172","msg":"trace[1549549526] range","detail":"{range_begin:/registry/resourceslices; range_end:; response_count:0; response_revision:3412; }","duration":"117.523661ms","start":"2025-12-17T01:30:14.148383Z","end":"2025-12-17T01:30:14.265907Z","steps":["trace[1549549526] 'agreement among raft nodes before linearized reading'  (duration: 117.510049ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266022Z","caller":"traceutil/trace.go:172","msg":"trace[665321617] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:4; response_revision:3412; }","duration":"117.650928ms","start":"2025-12-17T01:30:14.148366Z","end":"2025-12-17T01:30:14.266017Z","steps":["trace[665321617] 'agreement among raft nodes before linearized reading'  (duration: 117.604168ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266120Z","caller":"traceutil/trace.go:172","msg":"trace[1222872720] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:3412; }","duration":"117.770604ms","start":"2025-12-17T01:30:14.148345Z","end":"2025-12-17T01:30:14.266115Z","steps":["trace[1222872720] 'agreement among raft nodes before linearized reading'  (duration: 117.753686ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266201Z","caller":"traceutil/trace.go:172","msg":"trace[1508353187] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:3412; }","duration":"117.875611ms","start":"2025-12-17T01:30:14.148322Z","end":"2025-12-17T01:30:14.266197Z","steps":["trace[1508353187] 'agreement among raft nodes before linearized reading'  (duration: 117.858676ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266280Z","caller":"traceutil/trace.go:172","msg":"trace[2115891653] range","detail":"{range_begin:/registry/csistoragecapacities; range_end:; response_count:0; response_revision:3412; }","duration":"117.996568ms","start":"2025-12-17T01:30:14.148279Z","end":"2025-12-17T01:30:14.266275Z","steps":["trace[2115891653] 'agreement among raft nodes before linearized reading'  (duration: 117.980987ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266366Z","caller":"traceutil/trace.go:172","msg":"trace[468403184] range","detail":"{range_begin:/registry/clusterrolebindings; range_end:; response_count:0; response_revision:3412; }","duration":"118.102411ms","start":"2025-12-17T01:30:14.148259Z","end":"2025-12-17T01:30:14.266361Z","steps":["trace[468403184] 'agreement among raft nodes before linearized reading'  (duration: 118.084738ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266493Z","caller":"traceutil/trace.go:172","msg":"trace[2046334447] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:7; response_revision:3412; }","duration":"118.248303ms","start":"2025-12-17T01:30:14.148241Z","end":"2025-12-17T01:30:14.266489Z","steps":["trace[2046334447] 'agreement among raft nodes before linearized reading'  (duration: 118.18643ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266598Z","caller":"traceutil/trace.go:172","msg":"trace[230986433] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:3412; }","duration":"118.372953ms","start":"2025-12-17T01:30:14.148220Z","end":"2025-12-17T01:30:14.266593Z","steps":["trace[230986433] 'agreement among raft nodes before linearized reading'  (duration: 118.353868ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266682Z","caller":"traceutil/trace.go:172","msg":"trace[1301493726] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:3412; }","duration":"118.481643ms","start":"2025-12-17T01:30:14.148196Z","end":"2025-12-17T01:30:14.266678Z","steps":["trace[1301493726] 'agreement among raft nodes before linearized reading'  (duration: 118.465537ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T01:30:14.266784Z","caller":"traceutil/trace.go:172","msg":"trace[922218029] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:11; response_revision:3412; }","duration":"118.706442ms","start":"2025-12-17T01:30:14.148074Z","end":"2025-12-17T01:30:14.266780Z","steps":["trace[922218029] 'agreement among raft nodes before linearized reading'  (duration: 118.64086ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:36:24 up  7:18,  0 user,  load average: 0.73, 1.32, 1.51
	Linux ha-202151 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [421b902e0a04a8b9de33dba40eff9de2915e948b549831a023a55f14ab43a351] <==
	I1217 01:35:41.944972       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:35:51.945291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:35:51.945466       1 main.go:301] handling current node
	I1217 01:35:51.945509       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:35:51.945541       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:35:51.945702       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:35:51.945745       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:01.941685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:36:01.941720       1 main.go:301] handling current node
	I1217 01:36:01.941737       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:36:01.941744       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:36:01.941941       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:36:01.941956       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:11.945404       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:36:11.945503       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:36:11.945683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:36:11.945723       1 main.go:301] handling current node
	I1217 01:36:11.945774       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:36:11.945806       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:36:21.945230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:36:21.945265       1 main.go:301] handling current node
	I1217 01:36:21.945281       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:36:21.945287       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:36:21.945435       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:36:21.945442       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43] <==
	{"level":"warn","ts":"2025-12-17T01:30:14.097955Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a885a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e254a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098431Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c61680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098550Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a21e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002813860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098715Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021443c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100260Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002913860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100450Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002114960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001752b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002912d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.101157Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a9c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.108687Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 01:30:14.109232       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 01:30:14.109341       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111281       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111377       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.112738       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.651626ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-12-17T01:30:14.178037Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000eec000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I1217 01:30:20.949098       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1217 01:30:43.911399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1217 01:31:13.533495       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:32:03.642642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 01:32:03.692026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317] <==
	I1217 01:30:11.991091       1 serving.go:386] Generated self-signed cert in-memory
	I1217 01:30:13.217832       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1217 01:30:13.217864       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:13.219443       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 01:30:13.219569       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 01:30:13.220274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 01:30:13.220329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 01:30:24.189762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee] <==
	E1217 01:31:33.506016       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506049       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506057       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506063       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:33.506069       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506199       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506373       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506437       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	I1217 01:31:53.524733       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.571989       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.572097       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.606958       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.607067       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646154       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646268       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695195       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695310       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742527       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742634       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785957       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785994       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:31:53.833471       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:32:03.448660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	
	
	==> kube-proxy [4f3ffacfcf52c27d4a48be1c9762e97d9c8b2f9eff204b9108c451da8b2defab] <==
	E1217 01:28:51.112803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:58.124554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:10.248785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:26.153294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:30:07.912871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 01:30:42.899769       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:30:42.899808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 01:30:42.899895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:30:42.921440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 01:30:42.921510       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:30:42.927648       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:30:42.928009       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:30:42.928034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:42.931509       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:30:42.931589       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:30:42.931909       1 config.go:200] "Starting service config controller"
	I1217 01:30:42.931953       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:30:42.932968       1 config.go:309] "Starting node config controller"
	I1217 01:30:42.932995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:30:42.933003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:30:42.933332       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:30:42.933352       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:30:43.031859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 01:30:43.032046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:30:43.033393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e] <==
	E1217 01:28:38.924937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:38.925147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:38.925091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:38.925212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:38.925293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:39.827962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:39.828496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 01:28:39.945026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:39.947443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 01:28:40.059965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 01:28:40.060779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:40.088703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:40.109776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 01:28:40.129468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:40.134968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 01:28:40.195130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 01:28:40.254624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 01:28:40.281191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 01:28:40.314175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 01:28:40.347761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 01:28:40.381360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 01:28:40.463231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 01:28:40.490812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 01:28:40.517370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 01:28:41.991837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 01:29:56 ha-202151 kubelet[802]: I1217 01:29:56.984304     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:29:56 ha-202151 kubelet[802]: E1217 01:29:56.984531     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:01 ha-202151 kubelet[802]: E1217 01:30:01.439578     802 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-202151)" interval="400ms"
	Dec 17 01:30:02 ha-202151 kubelet[802]: E1217 01:30:02.001281     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:10 ha-202151 kubelet[802]: I1217 01:30:10.983522     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:11 ha-202151 kubelet[802]: E1217 01:30:11.841503     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	Dec 17 01:30:12 ha-202151 kubelet[802]: E1217 01:30:12.002934     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.438401     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.439109     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:24 ha-202151 kubelet[802]: E1217 01:30:24.439355     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.449813     802 scope.go:117] "RemoveContainer" containerID="61c769055e2e33178655adbc6de856c58722cb4c70738c4d94a535d730bf75c6"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.450264     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:27 ha-202151 kubelet[802]: E1217 01:30:27.450420     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:29 ha-202151 kubelet[802]: I1217 01:30:29.966353     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:29 ha-202151 kubelet[802]: E1217 01:30:29.966538     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:34 ha-202151 kubelet[802]: I1217 01:30:34.175661     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:34 ha-202151 kubelet[802]: E1217 01:30:34.175845     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:38 ha-202151 kubelet[802]: I1217 01:30:38.984627     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:38 ha-202151 kubelet[802]: E1217 01:30:38.985748     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:47 ha-202151 kubelet[802]: I1217 01:30:47.984399     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:47 ha-202151 kubelet[802]: E1217 01:30:47.984633     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:52 ha-202151 kubelet[802]: I1217 01:30:52.985253     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:58 ha-202151 kubelet[802]: I1217 01:30:58.984851     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:58 ha-202151 kubelet[802]: E1217 01:30:58.985050     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:31:09 ha-202151 kubelet[802]: I1217 01:31:09.983912     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-202151 -n ha-202151
helpers_test.go:270: (dbg) Run:  kubectl --context ha-202151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (6.37s)
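Note: the post-mortem helper above extracts a single field of the cluster status with a Go template (--format={{.APIServer}}). The template is evaluated against the same Status struct that is logged verbatim in the stderr trace of the next test (Name, Host, Kubelet, APIServer, Kubeconfig), either per node or for the node selected with -n, so a compact per-node summary can be pulled the same way. A sketch, using only field names that appear in this report:

	out/minikube-linux-arm64 status -p ha-202151 --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'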

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (86.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node add --control-plane --alsologtostderr -v 5
E1217 01:36:45.353651 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 node add --control-plane --alsologtostderr -v 5: (1m20.91460457s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5: exit status 7 (885.789061ms)

                                                
                                                
-- stdout --
	ha-202151
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-202151-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-202151-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	ha-202151-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:37:48.324082 1244921 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:37:48.324213 1244921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:37:48.324225 1244921 out.go:374] Setting ErrFile to fd 2...
	I1217 01:37:48.324230 1244921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:37:48.324496 1244921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:37:48.324697 1244921 out.go:368] Setting JSON to false
	I1217 01:37:48.324730 1244921 mustload.go:66] Loading cluster: ha-202151
	I1217 01:37:48.324890 1244921 notify.go:221] Checking for updates...
	I1217 01:37:48.325209 1244921 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:37:48.325236 1244921 status.go:174] checking status of ha-202151 ...
	I1217 01:37:48.325798 1244921 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:37:48.347418 1244921 status.go:371] ha-202151 host status = "Running" (err=<nil>)
	I1217 01:37:48.347443 1244921 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:37:48.347783 1244921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:37:48.380583 1244921 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:37:48.380997 1244921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:37:48.381071 1244921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:37:48.402256 1244921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:37:48.517442 1244921 ssh_runner.go:195] Run: systemctl --version
	I1217 01:37:48.524105 1244921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:37:48.541158 1244921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:37:48.620660 1244921 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-17 01:37:48.607614088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:37:48.621232 1244921 kubeconfig.go:125] found "ha-202151" server: "https://192.168.49.254:8443"
	I1217 01:37:48.621272 1244921 api_server.go:166] Checking apiserver status ...
	I1217 01:37:48.621314 1244921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:48.636393 1244921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/954/cgroup
	I1217 01:37:48.644961 1244921 api_server.go:182] apiserver freezer: "4:freezer:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio/crio-b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:37:48.645029 1244921 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio/crio-b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43/freezer.state
	I1217 01:37:48.656782 1244921 api_server.go:204] freezer state: "THAWED"
	I1217 01:37:48.656808 1244921 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:37:48.666300 1244921 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:37:48.666327 1244921 status.go:463] ha-202151 apiserver status = Running (err=<nil>)
	I1217 01:37:48.666343 1244921 status.go:176] ha-202151 status: &{Name:ha-202151 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:37:48.666358 1244921 status.go:174] checking status of ha-202151-m02 ...
	I1217 01:37:48.666658 1244921 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:37:48.690227 1244921 status.go:371] ha-202151-m02 host status = "Running" (err=<nil>)
	I1217 01:37:48.690250 1244921 host.go:66] Checking if "ha-202151-m02" exists ...
	I1217 01:37:48.690574 1244921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:37:48.720579 1244921 host.go:66] Checking if "ha-202151-m02" exists ...
	I1217 01:37:48.720880 1244921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:37:48.720920 1244921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:37:48.747653 1244921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:37:48.850044 1244921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:37:48.868665 1244921 kubeconfig.go:125] found "ha-202151" server: "https://192.168.49.254:8443"
	I1217 01:37:48.868695 1244921 api_server.go:166] Checking apiserver status ...
	I1217 01:37:48.868739 1244921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 01:37:48.880998 1244921 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:37:48.881067 1244921 status.go:463] ha-202151-m02 apiserver status = Running (err=<nil>)
	I1217 01:37:48.881084 1244921 status.go:176] ha-202151-m02 status: &{Name:ha-202151-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:37:48.881114 1244921 status.go:174] checking status of ha-202151-m04 ...
	I1217 01:37:48.881457 1244921 cli_runner.go:164] Run: docker container inspect ha-202151-m04 --format={{.State.Status}}
	I1217 01:37:48.901108 1244921 status.go:371] ha-202151-m04 host status = "Stopped" (err=<nil>)
	I1217 01:37:48.901141 1244921 status.go:384] host is not running, skipping remaining checks
	I1217 01:37:48.901150 1244921 status.go:176] ha-202151-m04 status: &{Name:ha-202151-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:37:48.901170 1244921 status.go:174] checking status of ha-202151-m05 ...
	I1217 01:37:48.901513 1244921 cli_runner.go:164] Run: docker container inspect ha-202151-m05 --format={{.State.Status}}
	I1217 01:37:48.936360 1244921 status.go:371] ha-202151-m05 host status = "Running" (err=<nil>)
	I1217 01:37:48.936402 1244921 host.go:66] Checking if "ha-202151-m05" exists ...
	I1217 01:37:48.936926 1244921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m05
	I1217 01:37:48.956588 1244921 host.go:66] Checking if "ha-202151-m05" exists ...
	I1217 01:37:48.956927 1244921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:37:48.956970 1244921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m05
	I1217 01:37:48.975069 1244921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m05/id_rsa Username:docker}
	I1217 01:37:49.078263 1244921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:37:49.092554 1244921 kubeconfig.go:125] found "ha-202151" server: "https://192.168.49.254:8443"
	I1217 01:37:49.092585 1244921 api_server.go:166] Checking apiserver status ...
	I1217 01:37:49.092639 1244921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:37:49.104683 1244921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1212/cgroup
	I1217 01:37:49.113849 1244921 api_server.go:182] apiserver freezer: "4:freezer:/docker/face59dacb1b5ac9f571f827e8134428af029dbe48b9594e2a203771e140b907/crio/crio-96001cc9f1304cae795df68ed13f24fdb3ef41e6cf0767fa3b581731b50f330c"
	I1217 01:37:49.113987 1244921 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/face59dacb1b5ac9f571f827e8134428af029dbe48b9594e2a203771e140b907/crio/crio-96001cc9f1304cae795df68ed13f24fdb3ef41e6cf0767fa3b581731b50f330c/freezer.state
	I1217 01:37:49.126831 1244921 api_server.go:204] freezer state: "THAWED"
	I1217 01:37:49.126905 1244921 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:37:49.135384 1244921 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:37:49.135415 1244921 status.go:463] ha-202151-m05 apiserver status = Running (err=<nil>)
	I1217 01:37:49.135425 1244921 status.go:176] ha-202151-m05 status: &{Name:ha-202151-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:615: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5" : exit status 7
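The "apiserver: Stopped" verdict for ha-202151-m02 follows directly from the probe sequence visible in the stderr trace above: status first looks for a kube-apiserver process on the node, then (if one is found) confirms its cgroup freezer state is THAWED, and finally queries /healthz through the load-balancer endpoint. On m02 the very first step returned nothing (pgrep exited with status 1), so the node is reported as Stopped even though its host and kubelet are running. A minimal sketch of re-running the same checks by hand against this profile (node names, the VIP 192.168.49.254:8443, and the pgrep pattern are taken from the trace; anonymous access to /healthz is assumed, as allowed by default kubeadm RBAC):

	# 1. Look for the apiserver process on the affected node (this is the step that came back empty on m02)
	out/minikube-linux-arm64 ssh -p ha-202151 -n ha-202151-m02 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	# 2. The same probe on the primary control plane succeeds; its freezer state is then read under /sys/fs/cgroup
	out/minikube-linux-arm64 ssh -p ha-202151 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	# 3. Probe the HA virtual IP's health endpoint, as status does last
	curl -k https://192.168.49.254:8443/healthz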
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-202151
helpers_test.go:244: (dbg) docker inspect ha-202151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	        "Created": "2025-12-17T01:12:34.697109094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1225803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:28:24.223784082Z",
	            "FinishedAt": "2025-12-17T01:28:23.510213695Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d-json.log",
	        "Name": "/ha-202151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-202151:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-202151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	                "LowerDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-202151",
	                "Source": "/var/lib/docker/volumes/ha-202151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-202151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-202151",
	                "name.minikube.sigs.k8s.io": "ha-202151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a8bfe290f37deb1c3104d9ab559bda078e71c5706919642a39ad4ea7fcab4f9",
	            "SandboxKey": "/var/run/docker/netns/1a8bfe290f37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-202151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:fe:96:8f:04:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e224ccab4890fdef242aee82a08ae93dfe44ddd1860f17db152892136a611dec",
	                    "EndpointID": "d9f94b3340492bc0b924fd0e2620aaaaec200a88061066241297f013a7336f77",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-202151",
	                        "0d1af93acb20"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-202151 -n ha-202151
helpers_test.go:253: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 logs -n 25: (2.371714782s)
helpers_test.go:261: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt                                                             │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node start m02 --alsologtostderr -v 5                                                                                      │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │                     │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │                     │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:25 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5                                                                                   │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:27 UTC │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │                     │
	│ node    │ ha-202151 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:27 UTC │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:28 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:28 UTC │                     │
	│ node    │ ha-202151 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:36 UTC │ 17 Dec 25 01:37 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:28:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:28:23.957919 1225677 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.958241 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958276 1225677 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.958300 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958577 1225677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.958999 1225677 out.go:368] Setting JSON to false
	I1217 01:28:23.959883 1225677 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25854,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:28:23.959981 1225677 start.go:143] virtualization:  
	I1217 01:28:23.963109 1225677 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:28:23.966861 1225677 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:23.967008 1225677 notify.go:221] Checking for updates...
	I1217 01:28:23.972825 1225677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:23.975704 1225677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:23.978560 1225677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:28:23.981565 1225677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:28:23.984558 1225677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:23.987973 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.988577 1225677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:24.018679 1225677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:28:24.018817 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.078613 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.06901697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.078731 1225677 docker.go:319] overlay module found
	I1217 01:28:24.081724 1225677 out.go:179] * Using the docker driver based on existing profile
	I1217 01:28:24.084659 1225677 start.go:309] selected driver: docker
	I1217 01:28:24.084679 1225677 start.go:927] validating driver "docker" against &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.084825 1225677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:24.084933 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.139102 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.130176461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.139528 1225677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:28:24.139560 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:24.139616 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:24.139662 1225677 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.142829 1225677 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:28:24.145513 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:24.148343 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:24.151136 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:24.151182 1225677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:28:24.151172 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:24.151191 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:24.151281 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:24.151292 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:24.151447 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.170893 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:24.170917 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:24.170932 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:24.170962 1225677 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:24.171020 1225677 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-202151"
	I1217 01:28:24.171043 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:24.171052 1225677 fix.go:54] fixHost starting: 
	I1217 01:28:24.171312 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.188404 1225677 fix.go:112] recreateIfNeeded on ha-202151: state=Stopped err=<nil>
	W1217 01:28:24.188458 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:24.191811 1225677 out.go:252] * Restarting existing docker container for "ha-202151" ...
	I1217 01:28:24.191909 1225677 cli_runner.go:164] Run: docker start ha-202151
	I1217 01:28:24.438707 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.459881 1225677 kic.go:430] container "ha-202151" state is running.
	I1217 01:28:24.460741 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:24.487033 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.487599 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:24.487676 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:24.511372 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:24.513726 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:24.513748 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:24.516008 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:28:27.648958 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.648981 1225677 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:28:27.649043 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.671053 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.671376 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.671387 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:28:27.816001 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.816128 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.833557 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.833865 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.833885 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:27.968607 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:27.968638 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:27.968669 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:27.968686 1225677 provision.go:84] configureAuth start
	I1217 01:28:27.968751 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:27.986183 1225677 provision.go:143] copyHostCerts
	I1217 01:28:27.986244 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986288 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:27.986301 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986379 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:27.986471 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986493 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:27.986502 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986530 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:27.986576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986601 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:27.986609 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986637 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:27.986687 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
	I1217 01:28:28.161966 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:28.162074 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:28.162136 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.180162 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.276314 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:28.276374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:28.294399 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:28.294463 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:28:28.312546 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:28.312611 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:28.329872 1225677 provision.go:87] duration metric: took 361.168151ms to configureAuth
	I1217 01:28:28.329900 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:28.330141 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:28.330260 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.347687 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:28.348017 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:28.348037 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:28.719002 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:28.719025 1225677 machine.go:97] duration metric: took 4.231409969s to provisionDockerMachine
	I1217 01:28:28.719036 1225677 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:28:28.719047 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:28.719106 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:28.719158 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.741197 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.836254 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:28.839569 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:28.839599 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:28.839611 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:28.839667 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:28.839747 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:28.839758 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:28.839856 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:28.847310 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:28.864518 1225677 start.go:296] duration metric: took 145.466453ms for postStartSetup
	I1217 01:28:28.864667 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:28.864709 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.882572 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.974073 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:28.979262 1225677 fix.go:56] duration metric: took 4.808204011s for fixHost
	I1217 01:28:28.979289 1225677 start.go:83] releasing machines lock for "ha-202151", held for 4.808256014s
	I1217 01:28:28.979366 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:29.000545 1225677 ssh_runner.go:195] Run: cat /version.json
	I1217 01:28:29.000593 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:29.000605 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.000678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.017863 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.030045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.205586 1225677 ssh_runner.go:195] Run: systemctl --version
	I1217 01:28:29.212211 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:29.247878 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:29.252247 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:29.252372 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:29.260987 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:29.261012 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:29.261044 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:29.261091 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:29.276500 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:29.289977 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:29.290113 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:29.306150 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:29.319359 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:29.442260 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:29.554130 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:29.554229 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:29.569409 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:29.582225 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:29.693269 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:29.815821 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:29.829762 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:29.843587 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:29.843675 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.852929 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:29.853026 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.862094 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.870988 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.879860 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:29.888714 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.897427 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.906242 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.915392 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:29.923247 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:29.930867 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.085763 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:28:30.268466 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:28:30.268540 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:28:30.272645 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:28:30.272717 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:28:30.276359 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:28:30.302094 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:28:30.302194 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.329875 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.364988 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:28:30.367851 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:28:30.383155 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:28:30.387105 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.397488 1225677 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:28:30.397642 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:30.397701 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.434465 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.434490 1225677 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:28:30.434546 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.461597 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.461622 1225677 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:28:30.461631 1225677 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:28:30.461733 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:28:30.461815 1225677 ssh_runner.go:195] Run: crio config
	I1217 01:28:30.524993 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:30.525016 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:30.525041 1225677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:28:30.525063 1225677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:28:30.525197 1225677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
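The block above is the full configuration minikube renders for this node: kubeadm InitConfiguration/ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration, later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). A quick way to inspect and sanity-check it on the node, sketched here with the binary path taken from this log (`kubeadm config validate` needs a reasonably recent kubeadm, so treat that subcommand as an assumption on older versions):

    # inspect the rendered file and ask kubeadm to validate it
    minikube ssh -p ha-202151 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube ssh -p ha-202151 -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new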
	I1217 01:28:30.525219 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:28:30.525269 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:28:30.537247 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
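kube-vip's IPVS-based control-plane load balancing is only enabled when the ip_vs kernel module shows up in lsmod; the grep above exits 1, so minikube proceeds with only the ARP/leader-election settings seen in the manifest below. If IPVS were wanted, a minimal check/load on the host (the container shares the host kernel; this assumes the module is built for it) would be:

    # check for the module and try to load it
    lsmod | grep ip_vs || sudo modprobe ip_vs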
	I1217 01:28:30.537359 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
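This static pod manifest is what gets written to /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes, per the scp line below); it advertises the HA VIP 192.168.49.254 on port 8443 over ARP on eth0. A rough post-start check, using the same curl that minikube itself runs later in this log:

    # confirm the manifest landed and the VIP endpoint answers
    minikube ssh -p ha-202151 -- sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
    minikube ssh -p ha-202151 -- curl -sk https://192.168.49.254:8443/healthz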
	I1217 01:28:30.537423 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:28:30.545256 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:28:30.545330 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:28:30.553189 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:28:30.566160 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:28:30.579061 1225677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:28:30.591667 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:28:30.604079 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:28:30.607859 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.617660 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.737827 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:28:30.755642 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:28:30.755663 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:28:30.755694 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:30.755839 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:28:30.755906 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:28:30.755919 1225677 certs.go:257] generating profile certs ...
	I1217 01:28:30.755998 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:28:30.756031 1225677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698
	I1217 01:28:30.756050 1225677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:28:31.070955 1225677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 ...
	I1217 01:28:31.071062 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698: {Name:mke1b333e19e123d757f2361ffab64b3ce630ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071323 1225677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 ...
	I1217 01:28:31.071369 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698: {Name:mk12d8ef8dbb1ef8ff84c5ba8c83b430a9515230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071553 1225677 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:28:31.071777 1225677 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:28:31.071982 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:28:31.072020 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:28:31.072053 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:28:31.072099 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:28:31.072142 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:28:31.072179 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:28:31.072222 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:28:31.072260 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:28:31.072291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:28:31.072379 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:28:31.072496 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:28:31.072540 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:28:31.072623 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:28:31.072699 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:28:31.072755 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:28:31.072888 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:31.072995 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.073038 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.073074 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.073717 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:28:31.098054 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:28:31.121354 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:28:31.140746 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:28:31.159713 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:28:31.178284 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:28:31.196338 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:28:31.214382 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:28:31.231910 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:28:31.249283 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:28:31.267150 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:28:31.284464 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:28:31.297370 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:28:31.303511 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.310796 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:28:31.318435 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322279 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322380 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.363578 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:28:31.371139 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.378596 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:28:31.385983 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389802 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389911 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.449546 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:28:31.463605 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.474127 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:28:31.484475 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489596 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489713 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.551435 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:28:31.559450 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:28:31.573170 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:28:31.639157 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:28:31.715122 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:28:31.783477 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:28:31.844822 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:28:31.905215 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
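Each of these openssl runs exits 0 only if the certificate remains valid for at least the given window (86400 s = 24 h); a non-zero exit is what flags a cert as expiring. The same pattern, runnable on its own against any of the paths above:

    # -checkend N fails if the cert expires within N seconds
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "ok for 24h" || echo "expiring or unreadable"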
	I1217 01:28:31.967945 1225677 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:31.968163 1225677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:28:31.968241 1225677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:28:32.018626 1225677 cri.go:89] found id: "9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c"
	I1217 01:28:32.018691 1225677 cri.go:89] found id: "b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:28:32.018711 1225677 cri.go:89] found id: "d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e"
	I1217 01:28:32.018735 1225677 cri.go:89] found id: "f70584959dd02aedc5247d28de369b3dfbec762797364a5b46746119bcd380ba"
	I1217 01:28:32.018753 1225677 cri.go:89] found id: "82cc4882889dc4d930d89f36ac77114d0161f4172216bc47431b8697c0630be5"
	I1217 01:28:32.018781 1225677 cri.go:89] found id: ""
	I1217 01:28:32.018853 1225677 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 01:28:32.044061 1225677 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1217 01:28:32.044185 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:28:32.052950 1225677 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:28:32.053010 1225677 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:28:32.053080 1225677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:28:32.061188 1225677 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:32.061654 1225677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-202151" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.061797 1225677 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-202151" cluster setting kubeconfig missing "ha-202151" context setting]
	I1217 01:28:32.062106 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.062698 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:28:32.063465 1225677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:28:32.063546 1225677 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:28:32.063583 1225677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:28:32.063613 1225677 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:28:32.063651 1225677 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:28:32.063976 1225677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:28:32.063525 1225677 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:28:32.081817 1225677 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 01:28:32.081837 1225677 kubeadm.go:602] duration metric: took 28.80443ms to restartPrimaryControlPlane
	I1217 01:28:32.081846 1225677 kubeadm.go:403] duration metric: took 113.913079ms to StartCluster
	I1217 01:28:32.081861 1225677 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.081919 1225677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.082486 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.082669 1225677 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:28:32.082688 1225677 start.go:242] waiting for startup goroutines ...
	I1217 01:28:32.082706 1225677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:28:32.083152 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.086942 1225677 out.go:179] * Enabled addons: 
	I1217 01:28:32.089944 1225677 addons.go:530] duration metric: took 7.236595ms for enable addons: enabled=[]
	I1217 01:28:32.089983 1225677 start.go:247] waiting for cluster config update ...
	I1217 01:28:32.089992 1225677 start.go:256] writing updated cluster config ...
	I1217 01:28:32.093327 1225677 out.go:203] 
	I1217 01:28:32.096604 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.096790 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.100238 1225677 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:28:32.103257 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:32.106243 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:32.109227 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:32.109291 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:32.109420 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:32.109454 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:32.109592 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.109854 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:32.139073 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:32.139092 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:32.139106 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:32.139130 1225677 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:32.139181 1225677 start.go:364] duration metric: took 36.692µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:28:32.139199 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:32.139204 1225677 fix.go:54] fixHost starting: m02
	I1217 01:28:32.139463 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.170663 1225677 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:28:32.170689 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:32.173829 1225677 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:28:32.173910 1225677 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:28:32.543486 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.572710 1225677 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:28:32.573066 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:32.602951 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.603208 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:32.603266 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:32.629641 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:32.629950 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:32.629959 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:32.630596 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37710->127.0.0.1:33963: read: connection reset by peer
	I1217 01:28:35.808896 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:35.808924 1225677 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:28:35.808996 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:35.842137 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:35.842447 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:35.842466 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:28:36.038050 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:36.038178 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.082250 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:36.082569 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:36.082593 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:36.332805 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:36.332901 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:36.332944 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:36.332991 1225677 provision.go:84] configureAuth start
	I1217 01:28:36.333104 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:36.366101 1225677 provision.go:143] copyHostCerts
	I1217 01:28:36.366154 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366188 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:36.366198 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366291 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:36.366454 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366479 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:36.366484 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366514 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:36.366576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366600 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:36.366604 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366636 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:36.366685 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:28:36.714448 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:36.714609 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:36.714700 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.737234 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:36.864039 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:36.864124 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:36.913291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:36.913360 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:28:36.977060 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:36.977210 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:37.077043 1225677 provision.go:87] duration metric: took 744.017822ms to configureAuth
	I1217 01:28:37.077119 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:37.077458 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:37.077641 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:37.114203 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:37.114614 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:37.114630 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:38.749167 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
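The SSH command above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry entry for the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O on m02. Since the docker driver is in use, the node is just a container named ha-202151-m02, so one quick way to confirm the override file afterwards is:

    # show the generated CRI-O override on the m02 node
    docker exec ha-202151-m02 cat /etc/sysconfig/crio.minikube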
	I1217 01:28:38.749190 1225677 machine.go:97] duration metric: took 6.145972988s to provisionDockerMachine
	I1217 01:28:38.749202 1225677 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:28:38.749218 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:38.749280 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:38.749320 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.798164 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:38.934750 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:38.938751 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:38.938784 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:38.938805 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:38.938890 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:38.939022 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:38.939035 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:38.939161 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:38.949374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:38.977662 1225677 start.go:296] duration metric: took 228.444359ms for postStartSetup
	I1217 01:28:38.977768 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:38.977833 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.997045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.094589 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:39.100157 1225677 fix.go:56] duration metric: took 6.9609442s for fixHost
	I1217 01:28:39.100185 1225677 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.960996095s
	I1217 01:28:39.100277 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:39.121509 1225677 out.go:179] * Found network options:
	I1217 01:28:39.124537 1225677 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:28:39.127500 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:28:39.127546 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:28:39.127633 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:39.127678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.127731 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:39.127813 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.159911 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.160356 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.389362 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:39.518196 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:39.518280 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:39.530690 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:39.530730 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:39.530766 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:39.530828 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:39.559452 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:39.590703 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:39.590778 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:39.623053 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:39.646277 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:39.924657 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:40.211696 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:40.211818 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:40.234789 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:40.255311 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:40.483522 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:40.697787 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:40.728627 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:40.773025 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:40.773101 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.810962 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:40.811053 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.830095 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.843899 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.859512 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:40.875469 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.891423 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.906705 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.920139 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:40.935324 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:40.949872 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:41.265195 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:11.765812 1225677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.500580562s)
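The sed edits above pin pause_image and cgroup_manager (plus conmon_cgroup and the unprivileged-port sysctl) in /etc/crio/crio.conf.d/02-crio.conf before this restart, which takes roughly 90 s on this run. Reading the drop-in directly is one way to confirm the edits took effect:

    # show the values the sed edits were supposed to set
    docker exec ha-202151-m02 grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf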
	I1217 01:30:11.765836 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:11.765895 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:11.773685 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:30:11.773748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:30:11.777914 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:30:11.832219 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:30:11.832561 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.883307 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.931713 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:30:11.934749 1225677 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:30:11.937773 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:30:11.958180 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:11.963975 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:11.980941 1225677 mustload.go:66] Loading cluster: ha-202151
	I1217 01:30:11.981196 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:11.981523 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:30:12.010212 1225677 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:30:12.010538 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:30:12.010547 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:30:12.010562 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:12.010679 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:30:12.010721 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:30:12.010729 1225677 certs.go:257] generating profile certs ...
	I1217 01:30:12.010806 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:30:12.010871 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:30:12.010909 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:30:12.010918 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:30:12.010930 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:30:12.010942 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:30:12.010952 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:30:12.010963 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:30:12.010976 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:30:12.010988 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:30:12.010998 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:30:12.011046 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:30:12.011099 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:12.011108 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:30:12.011142 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:30:12.011167 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:12.011226 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:30:12.011276 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:30:12.011308 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.011330 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.011341 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.011405 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:30:12.040530 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:30:12.140835 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:30:12.145679 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:30:12.155103 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:30:12.158946 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:30:12.168468 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:30:12.172730 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:30:12.182622 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:30:12.186892 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:30:12.196428 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:30:12.200769 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:30:12.210174 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:30:12.214229 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:30:12.223408 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:12.242760 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:12.263233 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:12.281118 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:12.299303 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:30:12.317115 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:12.334779 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:12.352592 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:30:12.370481 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:30:12.389095 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:30:12.412594 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:12.449315 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:30:12.473400 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:30:12.494693 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:30:12.517806 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:30:12.543454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:30:12.563454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:30:12.583785 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:30:12.603782 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:30:12.611317 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.622461 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:30:12.631322 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635830 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635962 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.683099 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:12.692252 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.701723 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:12.714594 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719579 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719716 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.763558 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:12.772848 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.782803 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:30:12.792174 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.797950 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.798068 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.843461 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:12.852350 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:12.856738 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:12.902677 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:12.948658 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:12.994789 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:13.042684 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:13.096054 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
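	The six openssl invocations above use -checkend 86400 to confirm that none of the control-plane certificates expires within the next 24 hours (openssl exits 0 only if the certificate stays valid for the whole window). A minimal Go sketch of that check, written here for illustration only and not taken from minikube's source:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkCertValidFor reports whether the certificate at path is still valid
	// for at least `seconds` seconds, using the same idiom as the log above:
	// `openssl x509 -noout -in <path> -checkend <seconds>`.
	func checkCertValidFor(path string, seconds int) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return false, nil // non-zero exit: cert expires within the window
			}
			return false, err // openssl missing, unreadable file, etc.
		}
		return true, nil
	}

	func main() {
		ok, err := checkCertValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
		fmt.Println("valid for 24h:", ok, "err:", err)
	}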
	I1217 01:30:13.158401 1225677 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:30:13.158570 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:13.158615 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:30:13.158706 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:30:13.173582 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:30:13.173705 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
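	The static-pod manifest above is only written after the `lsmod | grep ip_vs` probe a few lines earlier; since that probe exited non-zero, control-plane load-balancing was skipped while the kube-vip leader-election VIP (192.168.49.254) was still configured. A hypothetical sketch of that probe, assuming only that lsmod and grep exist on the node (this is not minikube's kube-vip.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasIPVS checks for loaded ip_vs kernel modules the same way the log does,
	// via `lsmod | grep ip_vs`. grep exiting non-zero simply means no match.
	func hasIPVS() (bool, error) {
		out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
		if err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return false, nil // no ip_vs modules loaded
			}
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		ok, _ := hasIPVS()
		fmt.Println("ip_vs modules loaded:", ok)
	}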
	I1217 01:30:13.173834 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:13.183901 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:13.184021 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:30:13.192889 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:30:13.208806 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:13.224983 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:30:13.240987 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:13.245030 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:13.255387 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.401843 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.417093 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:13.416720 1225677 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:13.423303 1225677 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:13.426149 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.647974 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.667990 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:30:13.668105 1225677 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:30:13.668438 1225677 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201323 1225677 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:30:14.201352 1225677 node_ready.go:38] duration metric: took 532.861298ms for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201366 1225677 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:14.201430 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:14.702397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.202165 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.701679 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.202436 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.701593 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.202167 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.702134 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.201871 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.202178 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.702421 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.201608 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.701963 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.201849 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.702468 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.201659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.702284 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.202447 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.701767 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.201870 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.701725 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.202161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.701566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.201668 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.702034 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.202090 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.201787 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.701530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.202044 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.702049 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.202554 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.201868 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.702179 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.202396 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.702380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.701675 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.201765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.701936 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.201563 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.701569 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.202228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.702471 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.201812 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.701808 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.201588 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.701513 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.202142 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.701610 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.201867 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.702427 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.202172 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.202404 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.701704 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.201454 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.702205 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.201850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.702118 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.201665 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.702497 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.201634 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.701590 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.202217 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.202252 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.701540 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.702332 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.202380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.701545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.202215 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.701654 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.202277 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.701599 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.202236 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.702370 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.201552 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.702331 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.201545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.202549 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.202225 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.701571 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.202016 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.702392 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.212791 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.701639 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.202292 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.701781 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.201523 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.701618 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.201666 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.702192 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.202218 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.701749 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.201582 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.701583 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.201568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.702305 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.202030 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.702244 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.201601 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.702328 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.202314 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.701594 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.202413 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.701574 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.201566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.702440 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.701568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.202474 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
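	The minute-long run of pgrep calls above is a poll loop: roughly every 500 ms the runner checks whether a process matching kube-apiserver.*minikube.* has appeared, and when it gives up it falls back to the diagnostic gathering that follows (crictl listings and journalctl dumps). A self-contained, purely illustrative sketch of that wait-with-timeout pattern:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `sudo pgrep -xnf <pattern>` until it succeeds or the
	// context deadline expires, mirroring the ~500 ms retry cadence in the log.
	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
	}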
	I1217 01:31:13.701537 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:13.701628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:13.737091 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:13.737114 1225677 cri.go:89] found id: ""
	I1217 01:31:13.737124 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:13.737180 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.741133 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:13.741205 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:13.767828 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:13.767849 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:13.767854 1225677 cri.go:89] found id: ""
	I1217 01:31:13.767861 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:13.767916 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.772125 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.775836 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:13.775913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:13.807345 1225677 cri.go:89] found id: ""
	I1217 01:31:13.807369 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.807377 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:13.807384 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:13.807444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:13.838797 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:13.838817 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:13.838821 1225677 cri.go:89] found id: ""
	I1217 01:31:13.838829 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:13.838887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.843081 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.846896 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:13.846969 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:13.886939 1225677 cri.go:89] found id: ""
	I1217 01:31:13.886968 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.886977 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:13.886983 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:13.887045 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:13.927324 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:13.927350 1225677 cri.go:89] found id: ""
	I1217 01:31:13.927359 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:13.927418 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.932191 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:13.932281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:13.963576 1225677 cri.go:89] found id: ""
	I1217 01:31:13.963605 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.963614 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:13.963623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:13.963636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:14.061267 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:14.061313 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:14.083208 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:14.083318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:14.113297 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:14.113328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:14.168503 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:14.168540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:14.225258 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:14.225299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:14.254658 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:14.254688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:14.329954 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:14.329994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:14.363830 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:14.363859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:14.780185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:14.780213 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:14.780229 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:14.821746 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:14.821787 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
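	The container lookups interleaved with the log gathering above all reduce to `crictl ps -a --quiet --name=<component>`, which prints one container ID per line regardless of container state. A small illustrative helper; the sudo/crictl invocation matches the log but is an assumption about the node, not minikube's actual cri.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose name
	// matches the given CRI name filter, via `sudo crictl ps -a --quiet --name=<name>`.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if s := strings.TrimSpace(line); s != "" {
				ids = append(ids, s)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		fmt.Println(ids, err)
	}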
	I1217 01:31:17.348276 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:17.359506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:17.359576 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:17.385494 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.385522 1225677 cri.go:89] found id: ""
	I1217 01:31:17.385531 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:17.385587 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.389291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:17.389381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:17.417467 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:17.417488 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:17.417493 1225677 cri.go:89] found id: ""
	I1217 01:31:17.417501 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:17.417557 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.421553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.425305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:17.425381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:17.452893 1225677 cri.go:89] found id: ""
	I1217 01:31:17.452925 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.452935 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:17.452945 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:17.453003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:17.479708 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.479730 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.479736 1225677 cri.go:89] found id: ""
	I1217 01:31:17.479743 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:17.479799 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.484009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.487543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:17.487617 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:17.522723 1225677 cri.go:89] found id: ""
	I1217 01:31:17.522751 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.522760 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:17.522767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:17.522829 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:17.550998 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.551023 1225677 cri.go:89] found id: ""
	I1217 01:31:17.551032 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:17.551086 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.554682 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:17.554767 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:17.587610 1225677 cri.go:89] found id: ""
	I1217 01:31:17.587650 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.587659 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:17.587684 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:17.587709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.616971 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:17.617002 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:17.692991 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:17.693034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:17.741052 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:17.741081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:17.761199 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:17.761228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.792936 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:17.793007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.845716 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:17.845753 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.881065 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:17.881096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:17.982043 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:17.982082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:18.070492 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:18.070517 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:18.070531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:18.117818 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:18.117911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.668542 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:20.679148 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:20.679242 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:20.706664 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:20.706687 1225677 cri.go:89] found id: ""
	I1217 01:31:20.706697 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:20.706757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.711072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:20.711147 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:20.737754 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:20.737779 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.737784 1225677 cri.go:89] found id: ""
	I1217 01:31:20.737792 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:20.737847 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.741755 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.745506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:20.745577 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:20.778364 1225677 cri.go:89] found id: ""
	I1217 01:31:20.778386 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.778394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:20.778400 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:20.778458 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:20.807237 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.807262 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:20.807267 1225677 cri.go:89] found id: ""
	I1217 01:31:20.807275 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:20.807361 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.811689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.815755 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:20.815857 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:20.842433 1225677 cri.go:89] found id: ""
	I1217 01:31:20.842454 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.842464 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:20.842470 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:20.842526 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:20.869792 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:20.869821 1225677 cri.go:89] found id: ""
	I1217 01:31:20.869831 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:20.869887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.873765 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:20.873847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:20.900911 1225677 cri.go:89] found id: ""
	I1217 01:31:20.900940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.900952 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:20.900961 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:20.900974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.954883 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:20.954920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:21.002822 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:21.002852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:21.108368 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:21.108406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:21.135557 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:21.135588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:21.176576 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:21.176610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:21.205927 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:21.205961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:21.232870 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:21.232897 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:21.312344 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:21.312377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:21.333806 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:21.333836 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:21.415860 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:21.415895 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:21.415909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:23.961577 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:23.974520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:23.974616 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:24.008513 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.008538 1225677 cri.go:89] found id: ""
	I1217 01:31:24.008548 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:24.008627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.013203 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:24.013311 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:24.041344 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.041369 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.041374 1225677 cri.go:89] found id: ""
	I1217 01:31:24.041383 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:24.041499 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.045778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.049690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:24.049764 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:24.076869 1225677 cri.go:89] found id: ""
	I1217 01:31:24.076902 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.076912 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:24.076919 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:24.076982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:24.115429 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.115504 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.115535 1225677 cri.go:89] found id: ""
	I1217 01:31:24.115571 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:24.115649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.121035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.126165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:24.126286 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:24.153228 1225677 cri.go:89] found id: ""
	I1217 01:31:24.153253 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.153262 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:24.153268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:24.153326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:24.196715 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:24.196801 1225677 cri.go:89] found id: ""
	I1217 01:31:24.196825 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:24.196912 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.201554 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:24.201642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:24.230189 1225677 cri.go:89] found id: ""
	I1217 01:31:24.230214 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.230223 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:24.230232 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:24.230244 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:24.308144 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:24.308188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:24.326634 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:24.326664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:24.400916 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:24.400938 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:24.400952 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.448701 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:24.448743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.482276 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:24.482309 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:24.515534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:24.515567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:24.625661 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:24.625708 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.652399 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:24.652439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.693518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:24.693556 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.750020 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:24.750059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.278748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:27.290609 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:27.290689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:27.316966 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.316991 1225677 cri.go:89] found id: ""
	I1217 01:31:27.316999 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:27.317054 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.320866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:27.320938 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:27.347398 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.347422 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.347427 1225677 cri.go:89] found id: ""
	I1217 01:31:27.347436 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:27.347496 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.351488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.355369 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:27.355442 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:27.381534 1225677 cri.go:89] found id: ""
	I1217 01:31:27.381564 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.381574 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:27.381580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:27.381662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:27.410739 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.410810 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.410822 1225677 cri.go:89] found id: ""
	I1217 01:31:27.410831 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:27.410892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.415095 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.419246 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:27.419364 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:27.447586 1225677 cri.go:89] found id: ""
	I1217 01:31:27.447612 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.447622 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:27.447629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:27.447693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:27.474916 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.474941 1225677 cri.go:89] found id: ""
	I1217 01:31:27.474950 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:27.475035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.479118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:27.479203 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:27.506051 1225677 cri.go:89] found id: ""
	I1217 01:31:27.506078 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.506087 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:27.506097 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:27.506108 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:27.545535 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:27.545568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:27.641749 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:27.641830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:27.661191 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:27.661226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:27.738097 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:27.738120 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:27.738134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.782011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:27.782048 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.834514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:27.834550 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.905140 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:27.905177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.940830 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:27.940862 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.969106 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:27.969136 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.998807 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:27.998835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:30.578811 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:30.590365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:30.590444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:30.618562 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:30.618585 1225677 cri.go:89] found id: ""
	I1217 01:31:30.618594 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:30.618677 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.623874 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:30.624003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:30.654712 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:30.654734 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.654740 1225677 cri.go:89] found id: ""
	I1217 01:31:30.654747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:30.654831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.658663 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.662256 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:30.662333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:30.690956 1225677 cri.go:89] found id: ""
	I1217 01:31:30.690983 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.691000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:30.691008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:30.691073 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:30.720079 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.720104 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.720110 1225677 cri.go:89] found id: ""
	I1217 01:31:30.720118 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:30.720190 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.724290 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.728443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:30.728569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:30.762597 1225677 cri.go:89] found id: ""
	I1217 01:31:30.762665 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.762683 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:30.762690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:30.762769 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:30.793999 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:30.794022 1225677 cri.go:89] found id: ""
	I1217 01:31:30.794031 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:30.794087 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.798031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:30.798111 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:30.825811 1225677 cri.go:89] found id: ""
	I1217 01:31:30.825838 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.825848 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:30.825858 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:30.825900 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.874308 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:30.874349 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.932548 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:30.932596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.973410 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:30.973440 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:31.061854 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:31.061893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:31.081279 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:31.081308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:31.173788 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:31.173816 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:31.173832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:31.203476 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:31.203507 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:31.242819 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:31.242857 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:31.270107 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:31.270137 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:31.301308 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:31.301338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:33.901065 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:33.913301 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:33.913455 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:33.945005 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:33.945033 1225677 cri.go:89] found id: ""
	I1217 01:31:33.945042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:33.945100 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.949030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:33.949099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:33.980996 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:33.981019 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:33.981024 1225677 cri.go:89] found id: ""
	I1217 01:31:33.981032 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:33.981090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.985533 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.989328 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:33.989424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:34.020066 1225677 cri.go:89] found id: ""
	I1217 01:31:34.020105 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.020115 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:34.020123 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:34.020214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:34.054526 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.054551 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.054558 1225677 cri.go:89] found id: ""
	I1217 01:31:34.054566 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:34.054628 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.058716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.062466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:34.062539 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:34.100752 1225677 cri.go:89] found id: ""
	I1217 01:31:34.100777 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.100787 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:34.100794 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:34.100856 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:34.133409 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.133431 1225677 cri.go:89] found id: ""
	I1217 01:31:34.133440 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:34.133498 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.137315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:34.137386 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:34.169015 1225677 cri.go:89] found id: ""
	I1217 01:31:34.169048 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.169058 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:34.169068 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:34.169081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:34.230112 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:34.230152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:34.275030 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:34.275071 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.303312 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:34.303341 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:34.323613 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:34.323791 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.377596 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:34.377632 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.405931 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:34.405961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:34.485309 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:34.485348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:34.537697 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:34.537780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:34.640362 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:34.640409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:34.719202 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:34.719227 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:34.719241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.248692 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:37.259883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:37.259952 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:37.288047 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.288071 1225677 cri.go:89] found id: ""
	I1217 01:31:37.288092 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:37.288147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.291723 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:37.291791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:37.320405 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.320468 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:37.320473 1225677 cri.go:89] found id: ""
	I1217 01:31:37.320481 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:37.320536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.324331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.327725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:37.327795 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:37.353914 1225677 cri.go:89] found id: ""
	I1217 01:31:37.353940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.353949 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:37.353956 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:37.354033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:37.380050 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.380082 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:37.380088 1225677 cri.go:89] found id: ""
	I1217 01:31:37.380097 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:37.380169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.384466 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.388616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:37.388737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:37.434167 1225677 cri.go:89] found id: ""
	I1217 01:31:37.434203 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.434213 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:37.434235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:37.434327 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:37.463397 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.463418 1225677 cri.go:89] found id: ""
	I1217 01:31:37.463426 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:37.463501 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.467357 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:37.467429 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:37.496476 1225677 cri.go:89] found id: ""
	I1217 01:31:37.496504 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.496514 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:37.496523 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:37.496534 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:37.580269 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:37.580312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:37.598989 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:37.599020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:37.669887 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:37.669956 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:37.669985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.696910 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:37.696934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.741514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:37.741546 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.797620 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:37.797657 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.827250 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:37.827277 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:37.860098 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:37.860127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:37.981956 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:37.982003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:38.045819 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:38.045855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.580761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:40.592635 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:40.592708 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:40.620832 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:40.620856 1225677 cri.go:89] found id: ""
	I1217 01:31:40.620866 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:40.620942 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.624827 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:40.624914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:40.662358 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.662381 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.662386 1225677 cri.go:89] found id: ""
	I1217 01:31:40.662394 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:40.662452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.666347 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.669969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:40.670068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:40.698897 1225677 cri.go:89] found id: ""
	I1217 01:31:40.698922 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.698931 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:40.698938 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:40.699026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:40.726184 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.726254 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.726265 1225677 cri.go:89] found id: ""
	I1217 01:31:40.726273 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:40.726331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.730221 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.734070 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:40.734150 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:40.760090 1225677 cri.go:89] found id: ""
	I1217 01:31:40.760116 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.760125 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:40.760185 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:40.760251 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:40.790670 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:40.790693 1225677 cri.go:89] found id: ""
	I1217 01:31:40.790702 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:40.790754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.794861 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:40.794936 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:40.826103 1225677 cri.go:89] found id: ""
	I1217 01:31:40.826129 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.826138 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:40.826147 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:40.826160 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.878987 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:40.879066 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.924714 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:40.924751 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.980944 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:40.980981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:41.072994 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:41.073031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:41.105014 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:41.105042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:41.212780 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:41.212818 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:41.241014 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:41.241042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:41.277652 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:41.277684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:41.308943 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:41.308972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:41.328092 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:41.328123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:41.410133 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:43.911410 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:43.924272 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:43.924351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:43.953227 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:43.953252 1225677 cri.go:89] found id: ""
	I1217 01:31:43.953261 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:43.953337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.957558 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:43.957674 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:43.984394 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:43.984493 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:43.984513 1225677 cri.go:89] found id: ""
	I1217 01:31:43.984547 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:43.984626 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.988727 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.992395 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:43.992531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:44.023165 1225677 cri.go:89] found id: ""
	I1217 01:31:44.023242 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.023265 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:44.023285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:44.023376 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:44.056175 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.056249 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.056268 1225677 cri.go:89] found id: ""
	I1217 01:31:44.056293 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:44.056373 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.060006 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.063548 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:44.063623 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:44.091849 1225677 cri.go:89] found id: ""
	I1217 01:31:44.091875 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.091886 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:44.091892 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:44.091950 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:44.125771 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.125837 1225677 cri.go:89] found id: ""
	I1217 01:31:44.125861 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:44.125938 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.129707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:44.129781 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:44.157267 1225677 cri.go:89] found id: ""
	I1217 01:31:44.157343 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.157359 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:44.157369 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:44.157380 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:44.179921 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:44.180042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:44.227426 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:44.227495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:44.268056 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:44.268089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:44.312908 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:44.312943 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.344639 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:44.344673 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.370623 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:44.370650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:44.400984 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:44.401017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:44.494253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:44.494291 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:44.563778 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:44.563859 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:44.563887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.630776 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:44.630812 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.217775 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:47.228858 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:47.228999 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:47.258264 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.258287 1225677 cri.go:89] found id: ""
	I1217 01:31:47.258305 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:47.258366 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.262265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:47.262366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:47.293485 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.293508 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.293552 1225677 cri.go:89] found id: ""
	I1217 01:31:47.293562 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:47.293623 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.297395 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.300792 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:47.300866 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:47.329792 1225677 cri.go:89] found id: ""
	I1217 01:31:47.329818 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.329827 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:47.329833 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:47.329890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:47.356681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.356747 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:47.356758 1225677 cri.go:89] found id: ""
	I1217 01:31:47.356767 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:47.356839 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.360948 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.364494 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:47.364598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:47.390993 1225677 cri.go:89] found id: ""
	I1217 01:31:47.391021 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.391031 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:47.391037 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:47.391099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:47.417453 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.417517 1225677 cri.go:89] found id: ""
	I1217 01:31:47.417541 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:47.417618 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.421365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:47.421437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:47.447227 1225677 cri.go:89] found id: ""
	I1217 01:31:47.447254 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.447264 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:47.447273 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:47.447285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.474445 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:47.474475 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:47.546929 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:47.546947 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:47.546962 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.621943 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:47.621985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:47.653654 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:47.653679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:47.751509 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:47.751548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:47.773290 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:47.773323 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.802347 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:47.802378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.849646 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:47.849680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.894275 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:47.894315 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.949242 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:47.949281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.480769 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:50.491711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:50.491827 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:50.519320 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.519345 1225677 cri.go:89] found id: ""
	I1217 01:31:50.519353 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:50.519440 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.523424 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:50.523533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:50.551627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:50.551652 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:50.551658 1225677 cri.go:89] found id: ""
	I1217 01:31:50.551665 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:50.551751 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.555585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.559244 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:50.559347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:50.586218 1225677 cri.go:89] found id: ""
	I1217 01:31:50.586241 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.586249 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:50.586255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:50.586333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:50.618629 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.618661 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.618667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.618675 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:50.618776 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.622850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.626687 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:50.626824 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:50.659667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.659703 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.659713 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:50.659738 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:50.659817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:50.686997 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.687069 1225677 cri.go:89] found id: ""
	I1217 01:31:50.687092 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:50.687160 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.690709 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:50.690823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:50.721432 1225677 cri.go:89] found id: ""
	I1217 01:31:50.721509 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.721534 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:50.721553 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:50.721583 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.748223 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:50.748250 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.807290 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:50.807328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.835575 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:50.835603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.861513 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:50.861539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:50.937079 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:50.937118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:51.023701 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:51.023722 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:51.023736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:51.063322 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:51.063360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:51.134936 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:51.134983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:51.172581 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:51.172611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:51.279920 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:51.279958 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:53.800293 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:53.813493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:53.813572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:53.855699 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:53.855727 1225677 cri.go:89] found id: ""
	I1217 01:31:53.855737 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:53.855790 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.860842 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:53.860915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:53.905688 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:53.905715 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:53.905720 1225677 cri.go:89] found id: ""
	I1217 01:31:53.905727 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:53.905796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.911027 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.916033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:53.916105 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:53.971312 1225677 cri.go:89] found id: ""
	I1217 01:31:53.971339 1225677 logs.go:282] 0 containers: []
	W1217 01:31:53.971349 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:53.971356 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:53.971477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:54.021427 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.021456 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:54.021474 1225677 cri.go:89] found id: ""
	I1217 01:31:54.021488 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:54.021585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.030798 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.035177 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:54.035371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:54.113099 1225677 cri.go:89] found id: ""
	I1217 01:31:54.113124 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.113133 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:54.113139 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:54.113246 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:54.166627 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.166651 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.166658 1225677 cri.go:89] found id: ""
	I1217 01:31:54.166665 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:54.166783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.171754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.182182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:54.182283 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:54.234503 1225677 cri.go:89] found id: ""
	I1217 01:31:54.234567 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.234591 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:54.234615 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:54.234642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.275461 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:54.275532 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:54.366758 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:54.366801 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:54.403474 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:54.403513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:54.422090 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:54.422131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:54.486461 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:54.486497 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.553429 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:54.553466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.599563 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:54.599593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:54.706755 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:54.706795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:54.812798 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:54.812822 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:54.812835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:54.838401 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:54.838433 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:54.893784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:54.893823 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.427168 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:57.438551 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:57.438655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:57.468636 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:57.468660 1225677 cri.go:89] found id: ""
	I1217 01:31:57.468669 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:57.468726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.472745 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:57.472819 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:57.500682 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:57.500702 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.500707 1225677 cri.go:89] found id: ""
	I1217 01:31:57.500714 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:57.500777 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.504719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.508458 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:57.508557 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:57.540789 1225677 cri.go:89] found id: ""
	I1217 01:31:57.540813 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.540822 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:57.540828 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:57.540889 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:57.570366 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.570392 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.570398 1225677 cri.go:89] found id: ""
	I1217 01:31:57.570406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:57.570462 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.574531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.578702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:57.578782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:57.608017 1225677 cri.go:89] found id: ""
	I1217 01:31:57.608042 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.608051 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:57.608058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:57.608122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:57.634195 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:57.634218 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.634224 1225677 cri.go:89] found id: ""
	I1217 01:31:57.634232 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:57.634317 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.638339 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.642068 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:57.642166 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:57.669214 1225677 cri.go:89] found id: ""
	I1217 01:31:57.669250 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.669259 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:57.669268 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:57.669284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.733958 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:57.733991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.790688 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:57.790731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.825378 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:57.825409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:57.903425 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:57.903465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:57.977243 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:57.977266 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:57.977280 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:58.008228 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:58.008262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:58.044832 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:58.044861 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:58.076961 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:58.077009 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:58.174022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:58.174061 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:58.194526 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:58.194561 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:58.225629 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:58.225658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.768659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:00.779781 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:00.779855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:00.809961 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:00.809984 1225677 cri.go:89] found id: ""
	I1217 01:32:00.809993 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:00.810055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.814113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:00.814232 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:00.842110 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.842179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:00.842193 1225677 cri.go:89] found id: ""
	I1217 01:32:00.842202 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:00.842259 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.846284 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.850463 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:00.850535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:00.877321 1225677 cri.go:89] found id: ""
	I1217 01:32:00.877347 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.877357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:00.877364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:00.877424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:00.903950 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:00.904025 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:00.904044 1225677 cri.go:89] found id: ""
	I1217 01:32:00.904065 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:00.904183 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.907995 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.911685 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:00.911762 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:00.940826 1225677 cri.go:89] found id: ""
	I1217 01:32:00.940856 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.940865 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:00.940871 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:00.940931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:00.967056 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:00.967077 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:00.967088 1225677 cri.go:89] found id: ""
	I1217 01:32:00.967097 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:00.967175 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.970953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.975717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:00.975791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:01.010237 1225677 cri.go:89] found id: ""
	I1217 01:32:01.010262 1225677 logs.go:282] 0 containers: []
	W1217 01:32:01.010272 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:01.010281 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:01.010294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:01.030320 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:01.030353 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:01.055381 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:01.055409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:01.097515 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:01.097548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:01.166756 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:01.166797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:01.208792 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:01.208824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:01.246024 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:01.246056 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:01.340436 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:01.340519 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:01.412662 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:01.412684 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:01.412699 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:01.467190 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:01.467228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:01.500459 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:01.500486 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:01.531449 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:01.531477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
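
Editor's note: the repeated "connection refused" errors against localhost:8443 in the describe-nodes attempts above indicate that a kube-apiserver container exists but is not yet serving on its secure port. A minimal sketch of how this could be probed by hand, assuming shell access to the minikube node (for example via `minikube ssh`); the container ID below is a placeholder, not taken from this run:

	# list all kube-apiserver containers known to the CRI runtime, including exited ones
	sudo crictl ps -a --name kube-apiserver
	# tail the most recent apiserver container logs for startup errors (placeholder ID)
	sudo crictl logs --tail 100 <apiserver-container-id>
	# check whether the secure port answers at all; a healthy apiserver returns "ok"
	curl -sk https://localhost:8443/healthz
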
	I1217 01:32:04.134627 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:04.145902 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:04.145978 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:04.185746 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.185766 1225677 cri.go:89] found id: ""
	I1217 01:32:04.185774 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:04.185831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.189797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:04.189867 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:04.228673 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.228694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.228698 1225677 cri.go:89] found id: ""
	I1217 01:32:04.228706 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:04.228759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.233260 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.238075 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:04.238212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:04.268955 1225677 cri.go:89] found id: ""
	I1217 01:32:04.268983 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.268992 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:04.268999 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:04.269102 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:04.299973 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.300041 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.300061 1225677 cri.go:89] found id: ""
	I1217 01:32:04.300088 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:04.300185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.303813 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.307456 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:04.307533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:04.334293 1225677 cri.go:89] found id: ""
	I1217 01:32:04.334319 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.334331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:04.334338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:04.334398 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:04.360886 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.360906 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.360910 1225677 cri.go:89] found id: ""
	I1217 01:32:04.360918 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:04.360974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.365024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.368933 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:04.369005 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:04.397116 1225677 cri.go:89] found id: ""
	I1217 01:32:04.397140 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.397149 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:04.397159 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:04.397174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:04.490637 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:04.490721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.531861 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:04.531938 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.577801 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:04.577838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.635487 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:04.635524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.667260 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:04.667290 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:04.718117 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:04.718146 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:04.737680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:04.737711 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:04.825872 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:04.825894 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:04.825908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.858804 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:04.858833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.887920 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:04.887953 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.916371 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:04.916476 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:07.492728 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:07.504442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:07.504532 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:07.538372 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.538403 1225677 cri.go:89] found id: ""
	I1217 01:32:07.538442 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:07.538517 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.542523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:07.542597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:07.576339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:07.576360 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:07.576364 1225677 cri.go:89] found id: ""
	I1217 01:32:07.576372 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:07.576459 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.580149 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.584111 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:07.584196 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:07.610578 1225677 cri.go:89] found id: ""
	I1217 01:32:07.610605 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.610614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:07.610621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:07.610678 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:07.637129 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:07.637151 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:07.637157 1225677 cri.go:89] found id: ""
	I1217 01:32:07.637164 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:07.637217 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.641090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.644872 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:07.644992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:07.679300 1225677 cri.go:89] found id: ""
	I1217 01:32:07.679322 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.679331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:07.679350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:07.679419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:07.719129 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:07.719155 1225677 cri.go:89] found id: ""
	I1217 01:32:07.719164 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:07.719231 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.723681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:07.723755 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:07.756924 1225677 cri.go:89] found id: ""
	I1217 01:32:07.756950 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.756969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:07.756979 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:07.756991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:07.856049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:07.856088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:07.935429 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:07.935456 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:07.935469 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.961013 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:07.961042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:08.005989 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:08.006024 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:08.039061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:08.039092 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:08.058159 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:08.058194 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:08.112456 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:08.112490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:08.176389 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:08.176457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:08.215782 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:08.215809 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:08.244713 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:08.244743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:10.828143 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:10.838717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:10.838793 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:10.869672 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:10.869696 1225677 cri.go:89] found id: ""
	I1217 01:32:10.869705 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:10.869761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.873603 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:10.873720 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:10.900811 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:10.900837 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:10.900843 1225677 cri.go:89] found id: ""
	I1217 01:32:10.900851 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:10.900906 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.904643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.908193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:10.908261 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:10.935598 1225677 cri.go:89] found id: ""
	I1217 01:32:10.935624 1225677 logs.go:282] 0 containers: []
	W1217 01:32:10.935634 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:10.935641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:10.935698 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:10.966869 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:10.966894 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:10.966899 1225677 cri.go:89] found id: ""
	I1217 01:32:10.966907 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:10.966962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.970920 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.974605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:10.974715 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:11.012577 1225677 cri.go:89] found id: ""
	I1217 01:32:11.012602 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.012612 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:11.012618 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:11.012680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:11.048075 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.048100 1225677 cri.go:89] found id: ""
	I1217 01:32:11.048130 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:11.048185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:11.052014 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:11.052089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:11.084486 1225677 cri.go:89] found id: ""
	I1217 01:32:11.084511 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.084524 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:11.084533 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:11.084545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:11.192042 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:11.192076 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:11.218345 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:11.218378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:11.261837 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:11.261869 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:11.321100 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:11.321138 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:11.356360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:11.356390 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:11.433012 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:11.433054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:11.511248 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:11.511270 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:11.511287 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:11.549584 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:11.549614 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:11.596753 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:11.596786 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.626208 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:11.626240 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.173611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:14.187629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:14.187704 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:14.223146 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.223170 1225677 cri.go:89] found id: ""
	I1217 01:32:14.223179 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:14.223264 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.227607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:14.227721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:14.255753 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:14.255791 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.255796 1225677 cri.go:89] found id: ""
	I1217 01:32:14.255804 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:14.255881 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.259963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.263644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:14.263717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:14.290575 1225677 cri.go:89] found id: ""
	I1217 01:32:14.290599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.290614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:14.290621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:14.290681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:14.318287 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.318309 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.318314 1225677 cri.go:89] found id: ""
	I1217 01:32:14.318323 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:14.318378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.322352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.326073 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:14.326157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:14.352179 1225677 cri.go:89] found id: ""
	I1217 01:32:14.352205 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.352214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:14.352221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:14.352304 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:14.380539 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.380565 1225677 cri.go:89] found id: ""
	I1217 01:32:14.380582 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:14.380678 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.385134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:14.385210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:14.417374 1225677 cri.go:89] found id: ""
	I1217 01:32:14.417407 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.417417 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:14.417441 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:14.417457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.464173 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:14.464209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.491958 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:14.492035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.547112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:14.547180 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:14.617502 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:14.617525 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:14.617548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.645669 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:14.645697 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.705027 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:14.705070 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.738615 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:14.738689 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:14.819881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:14.819961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:14.917702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:14.917739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:14.940092 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:14.940127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
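
Editor's note: each cycle above follows the same pattern: minikube's log gatherer enumerates the control-plane containers with crictl, then tails their logs together with the kubelet, CRI-O, and dmesg output while it waits for the apiserver to come back. The same diagnostics can be reproduced manually; this is a sketch assembled from the commands visible in this log, assuming shell access to the node:

	# control-plane containers, running and exited
	sudo crictl ps -a
	# kubelet and CRI-O service logs
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# recent kernel warnings and errors
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
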
	I1217 01:32:17.482077 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:17.493126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:17.493227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:17.520116 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.520137 1225677 cri.go:89] found id: ""
	I1217 01:32:17.520155 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:17.520234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.524492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:17.524572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:17.553355 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.553419 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:17.553439 1225677 cri.go:89] found id: ""
	I1217 01:32:17.553454 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:17.553512 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.557145 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.560580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:17.560663 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:17.586798 1225677 cri.go:89] found id: ""
	I1217 01:32:17.586824 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.586843 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:17.586850 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:17.586915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:17.614063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.614096 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:17.614102 1225677 cri.go:89] found id: ""
	I1217 01:32:17.614110 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:17.614174 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.618083 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.621593 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:17.621662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:17.652917 1225677 cri.go:89] found id: ""
	I1217 01:32:17.652943 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.652964 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:17.652972 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:17.653029 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:17.679412 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.679435 1225677 cri.go:89] found id: ""
	I1217 01:32:17.679443 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:17.679508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.683530 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:17.683606 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:17.714591 1225677 cri.go:89] found id: ""
	I1217 01:32:17.714618 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.714628 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:17.714638 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:17.714652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.774158 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:17.774193 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.802731 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:17.802759 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:17.837385 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:17.837413 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:17.948723 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:17.948766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:17.967594 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:17.967622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.997257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:17.997350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:18.046163 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:18.046204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:18.075264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:18.075345 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:18.179955 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:18.180007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:18.261983 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:18.262017 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:18.262034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.814850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:20.826637 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:20.826710 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:20.867818 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:20.867839 1225677 cri.go:89] found id: ""
	I1217 01:32:20.867847 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:20.867902 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.871814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:20.871895 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:20.902722 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.902742 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:20.902746 1225677 cri.go:89] found id: ""
	I1217 01:32:20.902755 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:20.902808 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.907236 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.911156 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:20.911230 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:20.937933 1225677 cri.go:89] found id: ""
	I1217 01:32:20.937959 1225677 logs.go:282] 0 containers: []
	W1217 01:32:20.937968 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:20.937974 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:20.938063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:20.965558 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:20.965581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:20.965587 1225677 cri.go:89] found id: ""
	I1217 01:32:20.965595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:20.965652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.969565 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.973428 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:20.973498 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:21.012487 1225677 cri.go:89] found id: ""
	I1217 01:32:21.012512 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.012521 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:21.012527 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:21.012590 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:21.041411 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.041443 1225677 cri.go:89] found id: ""
	I1217 01:32:21.041455 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:21.041515 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:21.045571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:21.045672 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:21.074982 1225677 cri.go:89] found id: ""
	I1217 01:32:21.075005 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.075014 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:21.075023 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:21.075036 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:21.105151 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:21.105181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.131324 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:21.131398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:21.228426 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:21.228461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:21.285988 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:21.286020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:21.369964 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:21.370005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:21.406263 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:21.406295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:21.425680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:21.425710 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:21.503044 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:21.503067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:21.503083 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:21.533119 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:21.533147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:21.584619 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:21.584652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.145239 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:24.156031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:24.156112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:24.191491 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.191515 1225677 cri.go:89] found id: ""
	I1217 01:32:24.191523 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:24.191579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.196271 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:24.196344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:24.229412 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.229433 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.229437 1225677 cri.go:89] found id: ""
	I1217 01:32:24.229445 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:24.229502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.233353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.237055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:24.237137 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:24.264226 1225677 cri.go:89] found id: ""
	I1217 01:32:24.264252 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.264262 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:24.264268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:24.264330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:24.300946 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.300972 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.300977 1225677 cri.go:89] found id: ""
	I1217 01:32:24.300984 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:24.301038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.304900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.308160 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:24.308277 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:24.334573 1225677 cri.go:89] found id: ""
	I1217 01:32:24.334596 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.334606 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:24.334612 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:24.334670 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:24.367769 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.367791 1225677 cri.go:89] found id: ""
	I1217 01:32:24.367800 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:24.367853 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.371482 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:24.371586 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:24.398071 1225677 cri.go:89] found id: ""
	I1217 01:32:24.398095 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.398104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:24.398112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:24.398124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:24.466998 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:24.467073 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:24.467093 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.494797 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:24.494826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.566818 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:24.566859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.627760 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:24.627797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.657250 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:24.657278 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.683514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:24.683549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:24.703093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:24.703129 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.757376 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:24.757411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:24.839791 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:24.839826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:24.883947 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:24.883978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:27.492559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:27.503372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:27.503445 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:27.541590 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:27.541611 1225677 cri.go:89] found id: ""
	I1217 01:32:27.541620 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:27.541675 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.545373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:27.545448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:27.571462 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:27.571486 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:27.571491 1225677 cri.go:89] found id: ""
	I1217 01:32:27.571499 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:27.571555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.575671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.579240 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:27.579332 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:27.612215 1225677 cri.go:89] found id: ""
	I1217 01:32:27.612245 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.612254 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:27.612261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:27.612339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:27.639672 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:27.639696 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.639701 1225677 cri.go:89] found id: ""
	I1217 01:32:27.639708 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:27.639782 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.643953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.647820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:27.647942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:27.673115 1225677 cri.go:89] found id: ""
	I1217 01:32:27.673141 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.673150 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:27.673157 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:27.673215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:27.703404 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.703428 1225677 cri.go:89] found id: ""
	I1217 01:32:27.703437 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:27.703566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.708031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:27.708106 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:27.736748 1225677 cri.go:89] found id: ""
	I1217 01:32:27.736770 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.736779 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:27.736789 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:27.736802 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.763699 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:27.763727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.790990 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:27.791020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:27.871644 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:27.871680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:27.904392 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:27.904499 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:27.926297 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:27.926333 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:28.002149 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:28.002177 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:28.002196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:28.030901 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:28.030933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:28.070431 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:28.070463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:28.124957 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:28.124994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:28.185427 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:28.185465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:30.787761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:30.798953 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:30.799025 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:30.826532 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:30.826561 1225677 cri.go:89] found id: ""
	I1217 01:32:30.826570 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:30.826631 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.830429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:30.830503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:30.856397 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:30.856449 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:30.856462 1225677 cri.go:89] found id: ""
	I1217 01:32:30.856470 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:30.856524 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.860460 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.864121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:30.864204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:30.893119 1225677 cri.go:89] found id: ""
	I1217 01:32:30.893143 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.893153 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:30.893166 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:30.893225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:30.942371 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:30.942393 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:30.942398 1225677 cri.go:89] found id: ""
	I1217 01:32:30.942406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:30.942463 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.947748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.953053 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:30.953140 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:30.991763 1225677 cri.go:89] found id: ""
	I1217 01:32:30.991793 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.991802 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:30.991817 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:30.991888 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:31.026936 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.026958 1225677 cri.go:89] found id: ""
	I1217 01:32:31.026967 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:31.027022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:31.031253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:31.031338 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:31.060606 1225677 cri.go:89] found id: ""
	I1217 01:32:31.060632 1225677 logs.go:282] 0 containers: []
	W1217 01:32:31.060641 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:31.060650 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:31.060666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.089805 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:31.089837 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:31.179774 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:31.179814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:31.231705 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:31.231739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:31.264982 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:31.265014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:31.295319 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:31.295348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:31.398598 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:31.398635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:31.418439 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:31.418473 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:31.505328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:31.505348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:31.505364 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:31.534574 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:31.534604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:31.584571 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:31.584607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.145660 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:34.156555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:34.156680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:34.189334 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.189353 1225677 cri.go:89] found id: ""
	I1217 01:32:34.189361 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:34.189415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.193025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:34.193117 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:34.229137 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.229160 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.229165 1225677 cri.go:89] found id: ""
	I1217 01:32:34.229176 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:34.229234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.232921 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.236260 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:34.236361 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:34.264990 1225677 cri.go:89] found id: ""
	I1217 01:32:34.265013 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.265022 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:34.265028 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:34.265086 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:34.292130 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.292205 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.292225 1225677 cri.go:89] found id: ""
	I1217 01:32:34.292250 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:34.292344 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.295987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.299388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:34.299500 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:34.325943 1225677 cri.go:89] found id: ""
	I1217 01:32:34.326026 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.326042 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:34.326049 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:34.326108 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:34.363328 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.363351 1225677 cri.go:89] found id: ""
	I1217 01:32:34.363361 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:34.363415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.367803 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:34.367878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:34.394984 1225677 cri.go:89] found id: ""
	I1217 01:32:34.395011 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.395020 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:34.395029 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:34.395065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:34.470015 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:34.470036 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:34.470049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.496057 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:34.496091 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.549522 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:34.549555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.592693 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:34.592728 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.652425 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:34.652505 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.680716 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:34.680747 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.707492 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:34.707522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:34.787410 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:34.787492 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:34.892246 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:34.892284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:34.910499 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:34.910530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:37.463203 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:37.474127 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:37.474200 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:37.506946 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.507018 1225677 cri.go:89] found id: ""
	I1217 01:32:37.507042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:37.507123 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.511460 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:37.511535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:37.546992 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:37.547014 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:37.547020 1225677 cri.go:89] found id: ""
	I1217 01:32:37.547028 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:37.547090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.550864 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.554364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:37.554450 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:37.592224 1225677 cri.go:89] found id: ""
	I1217 01:32:37.592353 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.592394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:37.592437 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:37.592579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:37.620557 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.620581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:37.620587 1225677 cri.go:89] found id: ""
	I1217 01:32:37.620595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:37.620691 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.624719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.628465 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:37.628541 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:37.657843 1225677 cri.go:89] found id: ""
	I1217 01:32:37.657870 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.657878 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:37.657885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:37.657955 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:37.686792 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.686825 1225677 cri.go:89] found id: ""
	I1217 01:32:37.686834 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:37.686898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.690651 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:37.690783 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:37.719977 1225677 cri.go:89] found id: ""
	I1217 01:32:37.720000 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.720009 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:37.720018 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:37.720030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:37.738580 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:37.738610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:37.814847 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:37.814869 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:37.814883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.840694 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:37.840723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.901817 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:37.901855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.935757 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:37.935839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:38.014642 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:38.014679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:38.115079 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:38.115123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:38.157390 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:38.157423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:38.204086 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:38.204123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:38.235323 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:38.235355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:40.766175 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:40.777746 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:40.777818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:40.809026 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:40.809051 1225677 cri.go:89] found id: ""
	I1217 01:32:40.809060 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:40.809157 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.813212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:40.813294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:40.840793 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:40.840821 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:40.840826 1225677 cri.go:89] found id: ""
	I1217 01:32:40.840834 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:40.840915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.845018 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.848655 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:40.848732 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:40.875726 1225677 cri.go:89] found id: ""
	I1217 01:32:40.875750 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.875761 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:40.875767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:40.875825 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:40.902504 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:40.902527 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:40.902532 1225677 cri.go:89] found id: ""
	I1217 01:32:40.902540 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:40.902593 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.906394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.910259 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:40.910330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:40.936570 1225677 cri.go:89] found id: ""
	I1217 01:32:40.936599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.936609 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:40.936616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:40.936676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:40.964358 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:40.964381 1225677 cri.go:89] found id: ""
	I1217 01:32:40.964389 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:40.964541 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.968221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:40.968292 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:40.998606 1225677 cri.go:89] found id: ""
	I1217 01:32:40.998633 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.998644 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:40.998654 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:40.998668 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:41.022520 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:41.022551 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:41.051598 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:41.051625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:41.091115 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:41.091148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:41.159179 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:41.159223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:41.190970 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:41.190997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:41.225786 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:41.225815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:41.294484 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:41.294509 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:41.294523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:41.346979 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:41.347017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:41.374095 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:41.374126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:41.456622 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:41.456658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.066375 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:44.077293 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:44.077365 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:44.104332 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.104476 1225677 cri.go:89] found id: ""
	I1217 01:32:44.104504 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:44.104580 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.108715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:44.108799 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:44.140649 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.140672 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.140677 1225677 cri.go:89] found id: ""
	I1217 01:32:44.140684 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:44.140763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.144834 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.148730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:44.148811 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:44.197233 1225677 cri.go:89] found id: ""
	I1217 01:32:44.197259 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.197268 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:44.197274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:44.197350 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:44.240339 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:44.240363 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.240368 1225677 cri.go:89] found id: ""
	I1217 01:32:44.240376 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:44.240456 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.244962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.248793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:44.248913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:44.278464 1225677 cri.go:89] found id: ""
	I1217 01:32:44.278491 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.278501 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:44.278507 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:44.278585 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:44.308914 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.308938 1225677 cri.go:89] found id: ""
	I1217 01:32:44.308958 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:44.309048 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.313878 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:44.313951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:44.344530 1225677 cri.go:89] found id: ""
	I1217 01:32:44.344555 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.344577 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:44.344588 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:44.344600 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.372833 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:44.372864 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:44.452952 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:44.452990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:44.474609 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:44.474642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:44.552482 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:44.552507 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:44.552521 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.580322 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:44.580352 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.610292 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:44.610320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:44.643236 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:44.643266 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.755542 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:44.755601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.808715 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:44.808771 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.856301 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:44.856338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.419847 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:47.431877 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:47.431951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:47.461659 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:47.461682 1225677 cri.go:89] found id: ""
	I1217 01:32:47.461690 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:47.461747 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.465698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:47.465822 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:47.495157 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.495179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.495184 1225677 cri.go:89] found id: ""
	I1217 01:32:47.495192 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:47.495247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.499337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.503995 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:47.504080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:47.543135 1225677 cri.go:89] found id: ""
	I1217 01:32:47.543158 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.543167 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:47.543174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:47.543238 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:47.572765 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.572791 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:47.572797 1225677 cri.go:89] found id: ""
	I1217 01:32:47.572804 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:47.572867 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.577796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.581659 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:47.581760 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:47.612595 1225677 cri.go:89] found id: ""
	I1217 01:32:47.612660 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.612674 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:47.612681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:47.612744 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:47.642199 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:47.642223 1225677 cri.go:89] found id: ""
	I1217 01:32:47.642231 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:47.642287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.646215 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:47.646285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:47.672805 1225677 cri.go:89] found id: ""
	I1217 01:32:47.672830 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.672839 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:47.672849 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:47.672859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:47.702885 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:47.702917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:47.723284 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:47.723318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:47.799644 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:47.799674 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:47.799688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.839852 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:47.839884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.888519 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:47.888557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:47.973305 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:47.973344 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:48.081814 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:48.081853 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:48.114561 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:48.114590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:48.208193 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:48.208234 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:48.241262 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:48.241293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.770940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:50.781882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:50.781951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:50.809569 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:50.809595 1225677 cri.go:89] found id: ""
	I1217 01:32:50.809604 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:50.809665 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.814519 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:50.814594 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:50.849443 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:50.849472 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:50.849478 1225677 cri.go:89] found id: ""
	I1217 01:32:50.849486 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:50.849564 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.853510 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.857119 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:50.857224 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:50.888246 1225677 cri.go:89] found id: ""
	I1217 01:32:50.888275 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.888284 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:50.888291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:50.888351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:50.916294 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:50.916320 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:50.916326 1225677 cri.go:89] found id: ""
	I1217 01:32:50.916333 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:50.916388 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.920299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.924658 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:50.924730 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:50.957966 1225677 cri.go:89] found id: ""
	I1217 01:32:50.957994 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.958003 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:50.958009 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:50.958069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:50.991282 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.991304 1225677 cri.go:89] found id: ""
	I1217 01:32:50.991312 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:50.991377 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.995730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:50.995797 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:51.034122 1225677 cri.go:89] found id: ""
	I1217 01:32:51.034199 1225677 logs.go:282] 0 containers: []
	W1217 01:32:51.034238 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:51.034266 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:51.034295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:51.062022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:51.062100 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:51.081698 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:51.081733 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:51.112382 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:51.112482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:51.172152 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:51.172190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:51.213603 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:51.213634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:51.297400 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:51.297439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:51.331335 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:51.331412 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:51.426253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:51.426289 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:51.499310 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:51.499332 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:51.499348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:51.572760 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:51.572795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.122214 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:54.133644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:54.133721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:54.162887 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.162912 1225677 cri.go:89] found id: ""
	I1217 01:32:54.162922 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:54.162978 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.167057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:54.167127 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:54.205900 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.205920 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.205925 1225677 cri.go:89] found id: ""
	I1217 01:32:54.205932 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:54.205987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.210350 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.214343 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:54.214419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:54.246321 1225677 cri.go:89] found id: ""
	I1217 01:32:54.246348 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.246357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:54.246364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:54.246424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:54.276281 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.276305 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.276310 1225677 cri.go:89] found id: ""
	I1217 01:32:54.276319 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:54.276379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.281009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.285204 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:54.285281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:54.311149 1225677 cri.go:89] found id: ""
	I1217 01:32:54.311225 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.311251 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:54.311268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:54.311342 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:54.339737 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.339763 1225677 cri.go:89] found id: ""
	I1217 01:32:54.339771 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:54.339825 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.343615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:54.343749 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:54.370945 1225677 cri.go:89] found id: ""
	I1217 01:32:54.370971 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.370981 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:54.370991 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:54.371003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:54.390464 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:54.390495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:54.470328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:54.470363 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:54.470377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.495970 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:54.495999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.557300 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:54.557336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.585791 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:54.585821 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.612126 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:54.612152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:54.653218 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:54.653246 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:54.752385 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:54.752432 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.814139 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:54.814175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.885191 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:54.885226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:57.468539 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:57.479841 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:57.479913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:57.511032 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.511058 1225677 cri.go:89] found id: ""
	I1217 01:32:57.511067 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:57.511130 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.515373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:57.515446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:57.558508 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.558531 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:57.558537 1225677 cri.go:89] found id: ""
	I1217 01:32:57.558550 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:57.558622 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.563150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.567245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:57.567322 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:57.594294 1225677 cri.go:89] found id: ""
	I1217 01:32:57.594330 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.594341 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:57.594347 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:57.594411 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:57.626077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:57.626100 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.626106 1225677 cri.go:89] found id: ""
	I1217 01:32:57.626114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:57.626173 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.630289 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.634055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:57.634130 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:57.661683 1225677 cri.go:89] found id: ""
	I1217 01:32:57.661711 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.661721 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:57.661727 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:57.661785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:57.690521 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.690556 1225677 cri.go:89] found id: ""
	I1217 01:32:57.690565 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:57.690632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.694587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:57.694687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:57.721760 1225677 cri.go:89] found id: ""
	I1217 01:32:57.721783 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.721792 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:57.721801 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:57.721830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.749279 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:57.749308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.781988 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:57.782017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:57.820059 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:57.820089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:57.841084 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:57.841121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.884653 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:57.884752 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.932570 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:57.932605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:58.015607 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:58.015649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:58.116442 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:58.116479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:58.205896 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:58.205921 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:58.205934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:58.252524 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:58.252595 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
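Note: every "failed describe nodes" block above ends in "connection refused" against [::1]:8443, i.e. the kube-apiserver on this node is not serving yet, and minikube keeps re-collecting logs while it retries. A minimal way to confirm that state by hand, assuming shell access to the node (for example via `minikube ssh`); the health-check commands below are generic diagnostics and are not taken from this report:

  # is anything listening on the apiserver port?
  sudo ss -tlnp | grep 8443
  # a healthy apiserver answers "ok" on its health endpoint
  curl -sk https://localhost:8443/healthz
  # is the kube-apiserver container running, exited, or crash-looping?
  sudo crictl ps -a --name kube-apiserver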
	I1217 01:33:00.831933 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:00.843915 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:00.844011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:00.872994 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:00.873018 1225677 cri.go:89] found id: ""
	I1217 01:33:00.873027 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:00.873080 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.876819 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:00.876914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:00.904306 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:00.904329 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:00.904334 1225677 cri.go:89] found id: ""
	I1217 01:33:00.904342 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:00.904397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.908029 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.911563 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:00.911642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:00.940652 1225677 cri.go:89] found id: ""
	I1217 01:33:00.940678 1225677 logs.go:282] 0 containers: []
	W1217 01:33:00.940687 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:00.940694 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:00.940752 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:00.967462 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.967503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:00.967514 1225677 cri.go:89] found id: ""
	I1217 01:33:00.967522 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:00.967601 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.971689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.976107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:00.976187 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:01.015150 1225677 cri.go:89] found id: ""
	I1217 01:33:01.015230 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.015253 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:01.015273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:01.015366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:01.044488 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.044553 1225677 cri.go:89] found id: ""
	I1217 01:33:01.044578 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:01.044671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:01.048372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:01.048523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:01.083014 1225677 cri.go:89] found id: ""
	I1217 01:33:01.083096 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.083121 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:01.083173 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:01.083208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:01.181547 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:01.181588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:01.202930 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:01.202966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:01.255543 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:01.255580 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:01.282899 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:01.282927 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.310357 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:01.310387 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:01.361428 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:01.361458 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:01.439491 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:01.439564 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:01.439594 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:01.466548 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:01.466575 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:01.524293 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:01.524332 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:01.603276 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:01.603314 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
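Note: each polling round above enumerates the control-plane containers one component at a time with `sudo crictl ps -a --quiet --name=<component>`. The same enumeration can be reproduced as a single loop; this is an illustrative sketch built from the component list visible in the log, not minikube's own code:

  # illustrative only: mirrors the per-component queries in the log above
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    echo "== $c =="
    sudo crictl ps -a --quiet --name="$c"
  done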
	I1217 01:33:04.194004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:04.206859 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:04.206931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:04.245597 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.245621 1225677 cri.go:89] found id: ""
	I1217 01:33:04.245630 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:04.245688 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.249418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:04.249489 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:04.278257 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.278277 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.278284 1225677 cri.go:89] found id: ""
	I1217 01:33:04.278291 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:04.278405 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.282613 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.286801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:04.286878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:04.313756 1225677 cri.go:89] found id: ""
	I1217 01:33:04.313825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.313852 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:04.313866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:04.313946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:04.343505 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.343528 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.343533 1225677 cri.go:89] found id: ""
	I1217 01:33:04.343542 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:04.343595 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.347432 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.351245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:04.351318 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:04.378415 1225677 cri.go:89] found id: ""
	I1217 01:33:04.378443 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.378453 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:04.378461 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:04.378523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:04.404603 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.404635 1225677 cri.go:89] found id: ""
	I1217 01:33:04.404645 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:04.404699 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.408372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:04.408490 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:04.435025 1225677 cri.go:89] found id: ""
	I1217 01:33:04.435053 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.435063 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:04.435072 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:04.435084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:04.453398 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:04.453431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:04.532185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:04.532207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:04.532220 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.565093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:04.565122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.608097 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:04.608141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.669592 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:04.669635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.698199 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:04.698230 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.781891 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:04.781933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:04.889443 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:04.889483 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.935503 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:04.935540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.962255 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:04.962288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.497519 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:07.509544 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:07.509619 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:07.541912 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.541930 1225677 cri.go:89] found id: ""
	I1217 01:33:07.541938 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:07.541998 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.545880 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:07.545967 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:07.576061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.576085 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:07.576090 1225677 cri.go:89] found id: ""
	I1217 01:33:07.576098 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:07.576156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.580118 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.584118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:07.584216 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:07.613260 1225677 cri.go:89] found id: ""
	I1217 01:33:07.613288 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.613297 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:07.613304 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:07.613390 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:07.643089 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:07.643113 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:07.643118 1225677 cri.go:89] found id: ""
	I1217 01:33:07.643126 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:07.643181 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.646892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.650360 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:07.650433 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:07.677367 1225677 cri.go:89] found id: ""
	I1217 01:33:07.677393 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.677403 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:07.677409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:07.677515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:07.705475 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.705499 1225677 cri.go:89] found id: ""
	I1217 01:33:07.705508 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:07.705588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.709429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:07.709538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:07.737814 1225677 cri.go:89] found id: ""
	I1217 01:33:07.737838 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.737846 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:07.737855 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:07.737867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.767138 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:07.767166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.800084 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:07.800165 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:07.820093 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:07.820124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:07.887706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:07.887729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:07.887744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.915091 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:07.915122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.956054 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:07.956116 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:08.019066 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:08.019105 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:08.080377 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:08.080423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:08.124710 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:08.124793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:08.214495 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:08.214593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
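Note: the bundle gathered in every round combines journald units, dmesg, and per-container logs, each tailed to 400 lines. Reproducing roughly the same bundle manually looks like the sketch below (commands copied from the log; `<id>` is a placeholder for whatever container ID `crictl ps -a` reports):

  # journald units and kernel messages, as tailed by minikube above
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # per-container logs; substitute a real container ID for <id>
  sudo crictl logs --tail 400 <id>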
	I1217 01:33:10.827104 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:10.838284 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:10.838422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:10.874165 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:10.874184 1225677 cri.go:89] found id: ""
	I1217 01:33:10.874192 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:10.874245 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.878108 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:10.878180 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:10.903766 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:10.903789 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:10.903794 1225677 cri.go:89] found id: ""
	I1217 01:33:10.903802 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:10.903857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.907574 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.911142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:10.911214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:10.938246 1225677 cri.go:89] found id: ""
	I1217 01:33:10.938273 1225677 logs.go:282] 0 containers: []
	W1217 01:33:10.938283 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:10.938289 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:10.938347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:10.964843 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:10.964866 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:10.964871 1225677 cri.go:89] found id: ""
	I1217 01:33:10.964879 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:10.964935 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.968730 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.972392 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:10.972503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:11.008562 1225677 cri.go:89] found id: ""
	I1217 01:33:11.008590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.008600 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:11.008607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:11.008716 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:11.041307 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.041342 1225677 cri.go:89] found id: ""
	I1217 01:33:11.041352 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:11.041408 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:11.045319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:11.045394 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:11.072727 1225677 cri.go:89] found id: ""
	I1217 01:33:11.072757 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.072771 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:11.072781 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:11.072793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:11.092411 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:11.092531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:11.173959 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:11.173986 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:11.174000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:11.204098 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:11.204130 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:11.265126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:11.265169 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:11.329309 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:11.329350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:11.366487 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:11.366516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:11.449439 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:11.449474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:11.493614 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:11.493648 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.530111 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:11.530142 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:11.573692 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:11.573724 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
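Note: the "describe nodes" step runs the kubectl binary that minikube ships on the node against the node-local kubeconfig. To retry it by hand with request-level verbosity (paths copied from the log; the `-v=6` flag is a standard kubectl option, added here as an assumption about what would help debugging):

  sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig get nodes -v=6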
	I1217 01:33:14.175120 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:14.187102 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:14.187212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:14.217900 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.217923 1225677 cri.go:89] found id: ""
	I1217 01:33:14.217933 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:14.217993 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.228556 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:14.228632 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:14.256615 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.256694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.256731 1225677 cri.go:89] found id: ""
	I1217 01:33:14.256747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:14.256855 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.260873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.264886 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:14.264982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:14.293944 1225677 cri.go:89] found id: ""
	I1217 01:33:14.294012 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.294036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:14.294057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:14.294149 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:14.322566 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.322586 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.322591 1225677 cri.go:89] found id: ""
	I1217 01:33:14.322599 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:14.322693 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.326575 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.330162 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:14.330237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:14.356466 1225677 cri.go:89] found id: ""
	I1217 01:33:14.356491 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.356500 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:14.356506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:14.356566 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:14.386031 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:14.386055 1225677 cri.go:89] found id: ""
	I1217 01:33:14.386064 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:14.386142 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.390030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:14.390110 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:14.416257 1225677 cri.go:89] found id: ""
	I1217 01:33:14.416284 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.416293 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:14.416303 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:14.416317 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.511192 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:14.511232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:14.604109 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:14.604132 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:14.604148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.656861 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:14.656895 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.685614 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:14.685642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:14.764169 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:14.764208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:14.812699 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:14.812730 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:14.831513 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:14.831547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.858309 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:14.858339 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.909041 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:14.909072 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.975681 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:14.975723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
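Note: the etcd and kube-scheduler queries keep returning two container IDs each, while kube-apiserver and kube-controller-manager return one. With CRI-O, an older exited instance sitting next to a newer one usually means the container was restarted; that reading is an interpretation of this log, not something the report states. One way to check which instance is current:

  # show state and creation time for both etcd instances listed above
  sudo crictl ps -a --name etcd -o table
  # inspect a single instance for its state, start time, and exit code
  sudo crictl inspect 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605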
	I1217 01:33:17.515279 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:17.540730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:17.540806 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:17.570081 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:17.570102 1225677 cri.go:89] found id: ""
	I1217 01:33:17.570110 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:17.570178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.574399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:17.574471 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:17.599589 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:17.599610 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:17.599614 1225677 cri.go:89] found id: ""
	I1217 01:33:17.599622 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:17.599689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.604570 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.608574 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:17.608645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:17.635229 1225677 cri.go:89] found id: ""
	I1217 01:33:17.635306 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.635329 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:17.635350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:17.635422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:17.668964 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:17.669003 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.669009 1225677 cri.go:89] found id: ""
	I1217 01:33:17.669017 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:17.669103 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.673057 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.677753 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:17.677826 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:17.707206 1225677 cri.go:89] found id: ""
	I1217 01:33:17.707245 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.707255 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:17.707261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:17.707325 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:17.740289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.740313 1225677 cri.go:89] found id: ""
	I1217 01:33:17.740322 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:17.740385 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.744409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:17.744515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:17.771770 1225677 cri.go:89] found id: ""
	I1217 01:33:17.771797 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.771806 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:17.771815 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:17.771828 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.800155 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:17.800190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:17.882443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:17.882481 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:17.935750 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:17.935781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:17.954392 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:17.954425 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:18.031535 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:18.031568 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:18.031585 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:18.079987 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:18.080029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:18.108390 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:18.108454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:18.206148 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:18.206190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:18.238865 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:18.238894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:18.280200 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:18.280236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.844541 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:20.855183 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:20.855255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:20.883645 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:20.883666 1225677 cri.go:89] found id: ""
	I1217 01:33:20.883673 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:20.883731 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.888021 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:20.888094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:20.917299 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:20.917325 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:20.917330 1225677 cri.go:89] found id: ""
	I1217 01:33:20.917338 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:20.917397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.921256 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.925997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:20.926069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:20.952872 1225677 cri.go:89] found id: ""
	I1217 01:33:20.952898 1225677 logs.go:282] 0 containers: []
	W1217 01:33:20.952907 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:20.952913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:20.952970 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:20.979961 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.979983 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:20.979989 1225677 cri.go:89] found id: ""
	I1217 01:33:20.979998 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:20.980064 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.984302 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.989098 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:20.989171 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:21.023299 1225677 cri.go:89] found id: ""
	I1217 01:33:21.023365 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.023382 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:21.023390 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:21.023454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:21.052742 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.052763 1225677 cri.go:89] found id: ""
	I1217 01:33:21.052773 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:21.052830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:21.056774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:21.056847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:21.086360 1225677 cri.go:89] found id: ""
	I1217 01:33:21.086382 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.086391 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:21.086399 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:21.086411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:21.114471 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:21.114500 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:21.213416 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:21.213451 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:21.294188 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:21.294212 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:21.294253 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:21.321989 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:21.322022 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:21.361898 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:21.361940 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:21.415113 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:21.415151 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.443169 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:21.443202 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:21.538356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:21.538403 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:21.584226 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:21.584255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:21.602588 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:21.602625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.196991 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:24.207442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:24.207518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:24.243683 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.243708 1225677 cri.go:89] found id: ""
	I1217 01:33:24.243717 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:24.243772 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.247370 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:24.247444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:24.274124 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.274153 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.274159 1225677 cri.go:89] found id: ""
	I1217 01:33:24.274167 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:24.274224 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.277936 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.281546 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:24.281628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:24.310864 1225677 cri.go:89] found id: ""
	I1217 01:33:24.310893 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.310903 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:24.310910 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:24.310968 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:24.342620 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.342643 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.342648 1225677 cri.go:89] found id: ""
	I1217 01:33:24.342656 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:24.342714 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.346873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.350690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:24.350776 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:24.378447 1225677 cri.go:89] found id: ""
	I1217 01:33:24.378476 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.378486 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:24.378510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:24.378592 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:24.410097 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.410122 1225677 cri.go:89] found id: ""
	I1217 01:33:24.410132 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:24.410193 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.414020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:24.414094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:24.440741 1225677 cri.go:89] found id: ""
	I1217 01:33:24.440825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.440851 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:24.440879 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:24.440912 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:24.460132 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:24.460163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.493812 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:24.493842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.536741 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:24.536777 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.597219 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:24.597260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.663765 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:24.663805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.703808 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:24.703840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:24.784250 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:24.784288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:24.883741 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:24.883779 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:24.962818 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:24.962842 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:24.962856 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.994828 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:24.994858 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:27.546732 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:27.564740 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:27.564805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:27.608525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:27.608549 1225677 cri.go:89] found id: ""
	I1217 01:33:27.608558 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:27.608611 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.613062 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:27.613135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:27.659805 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:27.659827 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:27.659831 1225677 cri.go:89] found id: ""
	I1217 01:33:27.659838 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:27.659896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.664210 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.668351 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:27.668446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:27.704696 1225677 cri.go:89] found id: ""
	I1217 01:33:27.704771 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.704794 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:27.704815 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:27.704898 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:27.738798 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:27.738821 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:27.738827 1225677 cri.go:89] found id: ""
	I1217 01:33:27.738834 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:27.738896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.743026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.746985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:27.747059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:27.785087 1225677 cri.go:89] found id: ""
	I1217 01:33:27.785111 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.785119 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:27.785126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:27.785192 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:27.818270 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:27.818289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:27.818293 1225677 cri.go:89] found id: ""
	I1217 01:33:27.818300 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:27.818356 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.822652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.826638 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:27.826695 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:27.865573 1225677 cri.go:89] found id: ""
	I1217 01:33:27.865604 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.865613 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:27.865623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:27.865634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:27.972193 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:27.972232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:28.056562 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:28.056589 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:28.056605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:28.085398 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:28.085429 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:28.132214 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:28.132252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:28.174271 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:28.174303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:28.273045 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:28.273082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:28.321799 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:28.321880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:28.342146 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:28.342292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:28.406933 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:28.407120 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:28.498600 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:28.498680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:28.534124 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:28.534150 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.091052 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:31.103205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:31.103279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:31.140533 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.140556 1225677 cri.go:89] found id: ""
	I1217 01:33:31.140564 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:31.140627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.145121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:31.145202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:31.175735 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.175761 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.175768 1225677 cri.go:89] found id: ""
	I1217 01:33:31.175775 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:31.175832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.180026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.184555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:31.184628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:31.213074 1225677 cri.go:89] found id: ""
	I1217 01:33:31.213100 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.213110 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:31.213117 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:31.213174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:31.251260 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.251286 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.251291 1225677 cri.go:89] found id: ""
	I1217 01:33:31.251299 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:31.251354 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.255625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.259649 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:31.259726 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:31.287030 1225677 cri.go:89] found id: ""
	I1217 01:33:31.287056 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.287065 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:31.287072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:31.287128 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:31.314782 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.314851 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.314876 1225677 cri.go:89] found id: ""
	I1217 01:33:31.314902 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:31.314984 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.320071 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.324354 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:31.324534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:31.357412 1225677 cri.go:89] found id: ""
	I1217 01:33:31.357439 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.357449 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:31.357464 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:31.357480 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:31.462967 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:31.463006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:31.482965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:31.482995 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:31.552928 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:31.552952 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:31.552966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.579435 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:31.579470 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.619907 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:31.619945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.687595 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:31.687636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.720143 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:31.720175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.746106 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:31.746135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.812096 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:31.812131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.841610 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:31.841646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:31.920159 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:31.920197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:34.457713 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:34.469492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:34.469574 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:34.497755 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:34.497777 1225677 cri.go:89] found id: ""
	I1217 01:33:34.497786 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:34.497850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.501620 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:34.501703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:34.532206 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:34.532227 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:34.532231 1225677 cri.go:89] found id: ""
	I1217 01:33:34.532238 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:34.532299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.537376 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.541069 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:34.541142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:34.577690 1225677 cri.go:89] found id: ""
	I1217 01:33:34.577730 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.577740 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:34.577763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:34.577844 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:34.606156 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.606176 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:34.606180 1225677 cri.go:89] found id: ""
	I1217 01:33:34.606188 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:34.606243 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.610716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.614894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:34.614990 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:34.644563 1225677 cri.go:89] found id: ""
	I1217 01:33:34.644590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.644599 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:34.644605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:34.644685 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:34.673641 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:34.673666 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:34.673671 1225677 cri.go:89] found id: ""
	I1217 01:33:34.673679 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:34.673737 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.677531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.681295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:34.681370 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:34.708990 1225677 cri.go:89] found id: ""
	I1217 01:33:34.709071 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.709088 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:34.709099 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:34.709111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:34.809701 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:34.809785 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:34.828178 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:34.828210 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:34.903131 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 01:33:34.903155 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:34.903168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.971266 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:34.971304 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:35.004179 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:35.004215 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:35.041784 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:35.041815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:35.067541 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:35.067571 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:35.126841 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:35.126874 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:35.172191 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:35.172226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:35.200255 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:35.200295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:35.239991 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:35.240030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:37.824762 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:37.835623 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:37.835693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:37.865989 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:37.866008 1225677 cri.go:89] found id: ""
	I1217 01:33:37.866018 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:37.866073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.869857 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:37.869946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:37.898865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:37.898940 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:37.898960 1225677 cri.go:89] found id: ""
	I1217 01:33:37.898986 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:37.899093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.903232 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.907211 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:37.907281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:37.939280 1225677 cri.go:89] found id: ""
	I1217 01:33:37.939302 1225677 logs.go:282] 0 containers: []
	W1217 01:33:37.939311 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:37.939318 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:37.939379 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:37.967924 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:37.967945 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:37.967949 1225677 cri.go:89] found id: ""
	I1217 01:33:37.967957 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:37.968032 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.971797 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.975432 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:37.975510 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:38.007766 1225677 cri.go:89] found id: ""
	I1217 01:33:38.007790 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.007798 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:38.007805 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:38.007864 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:38.037473 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.037495 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.037503 1225677 cri.go:89] found id: ""
	I1217 01:33:38.037511 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:38.037566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.041569 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.045417 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:38.045524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:38.073829 1225677 cri.go:89] found id: ""
	I1217 01:33:38.073851 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.073860 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:38.073870 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:38.073882 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:38.093728 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:38.093764 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:38.176670 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:38.176690 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:38.176703 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:38.211414 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:38.211443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:38.263725 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:38.263761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:38.309151 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:38.309186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:38.338107 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:38.338143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.369538 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:38.369566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:38.449918 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:38.449954 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:38.542249 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:38.542288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:38.612539 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:38.612617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.642932 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:38.643015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
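The block above is one pass of the retry loop minikube runs while it waits for the apiserver to come back: it looks for a kube-apiserver process, lists the control-plane containers with crictl, tails their logs along with kubelet, dmesg and CRI-O, and tries `kubectl describe nodes`, which fails each time because nothing is accepting connections on localhost:8443. As a rough sketch, the same checks can be repeated by hand on the node using the commands already shown in the log (the container ID below is simply the one from this run):

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig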
	I1217 01:33:41.175028 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:41.186849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:41.186921 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:41.230880 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.230955 1225677 cri.go:89] found id: ""
	I1217 01:33:41.230992 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:41.231084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.235480 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:41.235641 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:41.266906 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.266980 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.267014 1225677 cri.go:89] found id: ""
	I1217 01:33:41.267040 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:41.267127 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.271136 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.275105 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:41.275225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:41.306499 1225677 cri.go:89] found id: ""
	I1217 01:33:41.306580 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.306603 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:41.306624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:41.306737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:41.333549 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.333575 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.333580 1225677 cri.go:89] found id: ""
	I1217 01:33:41.333589 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:41.333643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.337497 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.341450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:41.341531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:41.368976 1225677 cri.go:89] found id: ""
	I1217 01:33:41.369004 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.369014 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:41.369020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:41.369082 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:41.397520 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.397583 1225677 cri.go:89] found id: ""
	I1217 01:33:41.397607 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:41.397684 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.401528 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:41.401607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:41.427395 1225677 cri.go:89] found id: ""
	I1217 01:33:41.427423 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.427434 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:41.427444 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:41.427463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:41.525514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:41.525559 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:41.551264 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:41.551299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:41.625083 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:41.625123 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:41.625147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.702454 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:41.702490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.735107 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:41.735134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.769228 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:41.769269 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.799696 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:41.799725 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.848171 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:41.848207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.933395 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:41.933446 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:42.025408 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:42.025452 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:44.562646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:44.573393 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:44.573486 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:44.600868 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:44.600895 1225677 cri.go:89] found id: ""
	I1217 01:33:44.600906 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:44.600983 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.604710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:44.604780 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:44.632082 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:44.632158 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:44.632187 1225677 cri.go:89] found id: ""
	I1217 01:33:44.632208 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:44.632294 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.636315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.640212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:44.640285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:44.669382 1225677 cri.go:89] found id: ""
	I1217 01:33:44.669404 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.669413 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:44.669419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:44.669480 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:44.699713 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:44.699732 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.699737 1225677 cri.go:89] found id: ""
	I1217 01:33:44.699747 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:44.699801 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.703608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.707118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:44.707191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:44.733881 1225677 cri.go:89] found id: ""
	I1217 01:33:44.733905 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.733914 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:44.733921 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:44.733983 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:44.761418 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:44.761440 1225677 cri.go:89] found id: ""
	I1217 01:33:44.761449 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:44.761507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.765368 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:44.765451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:44.797562 1225677 cri.go:89] found id: ""
	I1217 01:33:44.797587 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.797595 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:44.797605 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:44.797617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.824683 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:44.824716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:44.935133 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:44.935177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:44.954652 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:44.954684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:45.015678 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:45.015775 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:45.189553 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:45.191524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:45.273264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:45.273306 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:45.371974 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:45.372013 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:45.409119 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:45.409149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:45.483606 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:45.483631 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:45.483645 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:45.511796 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:45.511826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.069605 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:48.081402 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:48.081501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:48.113467 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.113487 1225677 cri.go:89] found id: ""
	I1217 01:33:48.113496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:48.113554 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.123702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:48.123830 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:48.152225 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.152299 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.152320 1225677 cri.go:89] found id: ""
	I1217 01:33:48.152346 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:48.152452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.156596 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.160848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:48.160930 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:48.192903 1225677 cri.go:89] found id: ""
	I1217 01:33:48.192934 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.192944 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:48.192951 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:48.193016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:48.223459 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.223483 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.223489 1225677 cri.go:89] found id: ""
	I1217 01:33:48.223496 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:48.223577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.228708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.233033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:48.233131 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:48.264313 1225677 cri.go:89] found id: ""
	I1217 01:33:48.264339 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.264348 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:48.264355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:48.264430 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:48.292891 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.292963 1225677 cri.go:89] found id: ""
	I1217 01:33:48.292986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:48.293068 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.297013 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:48.297089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:48.324697 1225677 cri.go:89] found id: ""
	I1217 01:33:48.324724 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.324734 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:48.324743 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:48.324755 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:48.343285 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:48.343318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.401079 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:48.401121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.445651 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:48.445685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.487906 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:48.487936 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.520261 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:48.520288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:48.612095 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:48.612132 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:48.686505 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:48.686528 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:48.686545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.715518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:48.715549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.780723 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:48.780758 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:48.813883 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:48.813910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.424534 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:51.435019 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:51.435089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:51.461515 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.461539 1225677 cri.go:89] found id: ""
	I1217 01:33:51.461549 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:51.461610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.465697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:51.465778 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:51.494232 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.494254 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:51.494260 1225677 cri.go:89] found id: ""
	I1217 01:33:51.494267 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:51.494342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.498178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.501847 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:51.501920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:51.533242 1225677 cri.go:89] found id: ""
	I1217 01:33:51.533267 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.533277 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:51.533283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:51.533356 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:51.559915 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.559937 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:51.559942 1225677 cri.go:89] found id: ""
	I1217 01:33:51.559950 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:51.560017 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.563739 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.567426 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:51.567506 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:51.598933 1225677 cri.go:89] found id: ""
	I1217 01:33:51.598958 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.598978 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:51.598985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:51.599043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:51.628013 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:51.628085 1225677 cri.go:89] found id: ""
	I1217 01:33:51.628107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:51.628195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.632081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:51.632153 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:51.664059 1225677 cri.go:89] found id: ""
	I1217 01:33:51.664095 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.664104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:51.664114 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:51.664127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.703117 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:51.703141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.746864 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:51.746901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.813259 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:51.813294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:51.890408 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:51.890448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.996243 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:51.996281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:52.078355 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:52.078385 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:52.078399 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:52.124157 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:52.124201 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:52.158325 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:52.158406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:52.194882 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:52.194917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:52.236180 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:52.236223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:54.755766 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:54.766584 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:54.766659 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:54.794813 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:54.794834 1225677 cri.go:89] found id: ""
	I1217 01:33:54.794844 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:54.794900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.798697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:54.798816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:54.830345 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:54.830368 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:54.830374 1225677 cri.go:89] found id: ""
	I1217 01:33:54.830381 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:54.830437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.834212 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.837869 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:54.837958 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:54.865687 1225677 cri.go:89] found id: ""
	I1217 01:33:54.865710 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.865720 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:54.865726 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:54.865784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:54.893199 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:54.893222 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:54.893228 1225677 cri.go:89] found id: ""
	I1217 01:33:54.893236 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:54.893300 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.897296 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.901035 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:54.901109 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:54.935123 1225677 cri.go:89] found id: ""
	I1217 01:33:54.935150 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.935160 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:54.935165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:54.935227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:54.960828 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:54.960908 1225677 cri.go:89] found id: ""
	I1217 01:33:54.960925 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:54.960994 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.965788 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:54.965858 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:54.996816 1225677 cri.go:89] found id: ""
	I1217 01:33:54.996844 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.996854 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:54.996864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:54.996877 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:55.049187 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:55.049226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:55.122184 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:55.122224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:55.149525 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:55.149555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:55.259828 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:55.259866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:55.286876 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:55.286905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:55.332115 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:55.332149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:55.359308 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:55.359340 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:55.444861 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:55.444901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:55.492994 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:55.493026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:55.512281 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:55.512312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:55.587576 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
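Note: the repeated "connection refused" errors above mean nothing is listening on localhost:8443 yet. The kube-apiserver container exists (crictl found its id), but it is not serving, so every kubectl call made from inside the node fails and the harness simply retries a few seconds later. The sketch below is a minimal, illustrative version of that readiness probe; the /healthz path, the 8443 port, the skip-verify TLS setting and the retry interval are assumptions taken from the errors shown in this log, not minikube's actual implementation.

// Illustrative sketch only (not minikube source): poll the apiserver the way the
// log above keeps retrying, by probing https://localhost:8443/healthz until it
// answers or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver serves a self-signed cert during bring-up,
			// so verification is skipped for this probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// Matches the "connection refused" lines above: nothing is listening yet.
			fmt.Println("apiserver not ready:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with HTTP", resp.StatusCode)
		return
	}
	fmt.Println("gave up waiting for the apiserver")
}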
	I1217 01:33:58.089262 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:58.101573 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:58.101658 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:58.137991 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:58.138015 1225677 cri.go:89] found id: ""
	I1217 01:33:58.138024 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:58.138084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.142504 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:58.142579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:58.172313 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.172337 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.172343 1225677 cri.go:89] found id: ""
	I1217 01:33:58.172350 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:58.172446 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.176396 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.180282 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:58.180366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:58.211138 1225677 cri.go:89] found id: ""
	I1217 01:33:58.211171 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.211181 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:58.211193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:58.211257 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:58.243736 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.243759 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.243764 1225677 cri.go:89] found id: ""
	I1217 01:33:58.243773 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:58.243830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.247791 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.251576 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:58.251655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:58.288139 1225677 cri.go:89] found id: ""
	I1217 01:33:58.288173 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.288184 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:58.288193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:58.288255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:58.317667 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.317690 1225677 cri.go:89] found id: ""
	I1217 01:33:58.317700 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:58.317763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.321820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:58.321906 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:58.350850 1225677 cri.go:89] found id: ""
	I1217 01:33:58.350878 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.350888 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:58.350897 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:58.350910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.416830 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:58.416867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.444837 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:58.444868 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:58.528215 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:58.528263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:58.575846 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:58.575880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:58.595772 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:58.595807 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.650340 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:58.650375 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.701278 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:58.701316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.732779 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:58.732810 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:58.835274 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:58.835310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:58.910122 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.910207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:58.910236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
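Note: each retry above follows the same gathering pattern: for every control-plane component, crictl lists matching containers by name, then the last 400 lines of each container found are tailed. The following sketch reproduces that loop; it assumes crictl is on PATH and passwordless sudo is available, and it is illustrative only, not taken from the minikube source.

// Illustrative sketch only (not minikube source): the log-gathering loop visible above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("crictl ps failed for", name, ":", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Equivalent of: sudo crictl logs --tail 400 <id>
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("crictl logs failed for", id, ":", err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}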
	I1217 01:34:01.438103 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:01.448838 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:01.448920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:01.479627 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.479651 1225677 cri.go:89] found id: ""
	I1217 01:34:01.479678 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:01.479736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.483564 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:01.483634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:01.510339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.510364 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.510370 1225677 cri.go:89] found id: ""
	I1217 01:34:01.510378 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:01.510435 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.514437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.519025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:01.519139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:01.547434 1225677 cri.go:89] found id: ""
	I1217 01:34:01.547457 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.547466 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:01.547473 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:01.547530 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:01.574487 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.574508 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.574513 1225677 cri.go:89] found id: ""
	I1217 01:34:01.574520 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:01.574577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.578139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.581545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:01.581626 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:01.609342 1225677 cri.go:89] found id: ""
	I1217 01:34:01.609365 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.609374 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:01.609381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:01.609439 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:01.636506 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:01.636530 1225677 cri.go:89] found id: ""
	I1217 01:34:01.636540 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:01.636602 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.640274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:01.640388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:01.669875 1225677 cri.go:89] found id: ""
	I1217 01:34:01.669944 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.669969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:01.669993 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:01.670033 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.710653 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:01.710691 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.763990 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:01.764028 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.833068 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:01.833107 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.863940 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:01.864023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:01.967213 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:01.967254 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:01.992938 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:01.992972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:02.024381 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:02.024443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:02.106857 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:02.106896 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:02.143612 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:02.143646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:02.213706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:02.213729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:02.213742 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.741826 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:04.752958 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:04.753026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:04.783743 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.783762 1225677 cri.go:89] found id: ""
	I1217 01:34:04.783770 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:04.784150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.788287 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:04.788359 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:04.817040 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:04.817073 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:04.817079 1225677 cri.go:89] found id: ""
	I1217 01:34:04.817086 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:04.817147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.821094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.825495 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:04.825571 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:04.853100 1225677 cri.go:89] found id: ""
	I1217 01:34:04.853124 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.853133 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:04.853140 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:04.853202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:04.881403 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:04.881425 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:04.881430 1225677 cri.go:89] found id: ""
	I1217 01:34:04.881438 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:04.881502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.885516 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.889230 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:04.889353 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:04.915187 1225677 cri.go:89] found id: ""
	I1217 01:34:04.915219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.915229 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:04.915235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:04.915296 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:04.946769 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:04.946802 1225677 cri.go:89] found id: ""
	I1217 01:34:04.946811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:04.946884 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.951231 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:04.951339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:04.978082 1225677 cri.go:89] found id: ""
	I1217 01:34:04.978110 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.978120 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:04.978128 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:04.978166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:05.019076 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:05.019109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:05.101083 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:05.101161 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:05.177848 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:05.177870 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:05.177884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:05.204143 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:05.204172 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:05.268231 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:05.268268 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:05.297025 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:05.297054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:05.327881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:05.327911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:05.437319 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:05.437360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:05.456847 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:05.456883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:05.498209 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:05.498242 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.077748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:08.088818 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:08.088890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:08.126181 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.126213 1225677 cri.go:89] found id: ""
	I1217 01:34:08.126227 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:08.126292 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.131226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:08.131346 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:08.160808 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.160832 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.160837 1225677 cri.go:89] found id: ""
	I1217 01:34:08.160846 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:08.160923 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.166045 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.170405 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:08.170497 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:08.200928 1225677 cri.go:89] found id: ""
	I1217 01:34:08.200954 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.200964 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:08.200970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:08.201068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:08.237681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.237706 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.237711 1225677 cri.go:89] found id: ""
	I1217 01:34:08.237719 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:08.237794 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.241696 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.245486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:08.245561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:08.272543 1225677 cri.go:89] found id: ""
	I1217 01:34:08.272572 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.272582 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:08.272594 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:08.272676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:08.304603 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.304627 1225677 cri.go:89] found id: ""
	I1217 01:34:08.304635 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:08.304690 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.308617 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:08.308691 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:08.338781 1225677 cri.go:89] found id: ""
	I1217 01:34:08.338809 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.338818 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:08.338827 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:08.338839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:08.374627 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:08.374660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:08.472485 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:08.472523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:08.490991 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:08.491026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.574253 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:08.574292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.602049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:08.602118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:08.681328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:08.681348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:08.681361 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.708974 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:08.709000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.761284 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:08.761320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.819965 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:08.820006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.850377 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:08.850405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:11.432699 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:11.444142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:11.444218 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:11.477380 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.477404 1225677 cri.go:89] found id: ""
	I1217 01:34:11.477414 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:11.477475 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.481941 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:11.482014 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:11.510503 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.510529 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.510546 1225677 cri.go:89] found id: ""
	I1217 01:34:11.510554 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:11.510650 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.514842 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.518923 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:11.519013 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:11.546962 1225677 cri.go:89] found id: ""
	I1217 01:34:11.546990 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.547000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:11.547006 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:11.547080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:11.574757 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:11.574782 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:11.574787 1225677 cri.go:89] found id: ""
	I1217 01:34:11.574796 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:11.574877 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.579088 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.583273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:11.583402 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:11.613215 1225677 cri.go:89] found id: ""
	I1217 01:34:11.613244 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.613254 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:11.613261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:11.613326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:11.642127 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:11.642166 1225677 cri.go:89] found id: ""
	I1217 01:34:11.642175 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:11.642249 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.646180 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:11.646281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:11.676821 1225677 cri.go:89] found id: ""
	I1217 01:34:11.676848 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.676858 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:11.676868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:11.676880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:11.776881 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:11.776922 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:11.797665 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:11.797700 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:11.873871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:11.873895 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:11.873909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.901431 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:11.901461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.946983 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:11.947021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.993263 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:11.993299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:12.069104 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:12.069143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:12.101484 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:12.101511 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:12.137373 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:12.137404 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:12.219779 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:12.219833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:14.749747 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:14.760900 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:14.760971 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:14.789422 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:14.789504 1225677 cri.go:89] found id: ""
	I1217 01:34:14.789520 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:14.789579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.794016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:14.794094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:14.820779 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:14.820802 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:14.820808 1225677 cri.go:89] found id: ""
	I1217 01:34:14.820815 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:14.820892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.824759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.828502 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:14.828620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:14.855015 1225677 cri.go:89] found id: ""
	I1217 01:34:14.855042 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.855051 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:14.855058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:14.855118 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:14.882554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:14.882580 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:14.882586 1225677 cri.go:89] found id: ""
	I1217 01:34:14.882594 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:14.882649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.886723 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.890383 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:14.890487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:14.921014 1225677 cri.go:89] found id: ""
	I1217 01:34:14.921051 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.921077 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:14.921096 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:14.921186 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:14.950121 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:14.950151 1225677 cri.go:89] found id: ""
	I1217 01:34:14.950160 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:14.950235 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.954391 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:14.954491 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:14.981305 1225677 cri.go:89] found id: ""
	I1217 01:34:14.981381 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.981396 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:14.981406 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:14.981424 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:15.082515 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:15.082601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:15.115676 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:15.115766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:15.207150 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:15.207196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:15.253067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:15.253103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:15.282406 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:15.282434 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:15.332186 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:15.332232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:15.383617 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:15.383653 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:15.413724 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:15.413761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:15.512500 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:15.512539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:15.531712 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:15.531744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:15.607024 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.107382 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:18.125209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:18.125300 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:18.154715 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.154743 1225677 cri.go:89] found id: ""
	I1217 01:34:18.154759 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:18.154827 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.158989 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:18.159058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:18.186887 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.186906 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.186910 1225677 cri.go:89] found id: ""
	I1217 01:34:18.186918 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:18.186974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.191114 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.195016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:18.195088 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:18.230496 1225677 cri.go:89] found id: ""
	I1217 01:34:18.230522 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.230532 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:18.230541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:18.230603 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:18.257433 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.257453 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.257458 1225677 cri.go:89] found id: ""
	I1217 01:34:18.257466 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:18.257522 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.261223 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.264998 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:18.265077 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:18.298281 1225677 cri.go:89] found id: ""
	I1217 01:34:18.298359 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.298373 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:18.298381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:18.298438 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:18.326008 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:18.326029 1225677 cri.go:89] found id: ""
	I1217 01:34:18.326038 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:18.326094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.329952 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:18.330026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:18.355880 1225677 cri.go:89] found id: ""
	I1217 01:34:18.355914 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.355924 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:18.355956 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:18.355971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:18.430677 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:18.430716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:18.461146 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:18.461178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:18.483944 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:18.483976 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:18.558884 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.558914 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:18.558930 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.631593 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:18.631631 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.661399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:18.661431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:18.765933 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:18.765971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.798005 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:18.798035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.838207 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:18.838245 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.879939 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:18.879973 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.409362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:21.420285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:21.420355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:21.450399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:21.450424 1225677 cri.go:89] found id: ""
	I1217 01:34:21.450433 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:21.450488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.454541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:21.454613 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:21.484061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.484086 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:21.484091 1225677 cri.go:89] found id: ""
	I1217 01:34:21.484099 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:21.484156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.488024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.491648 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:21.491718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:21.522026 1225677 cri.go:89] found id: ""
	I1217 01:34:21.522052 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.522062 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:21.522071 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:21.522139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:21.554855 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.554887 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:21.554894 1225677 cri.go:89] found id: ""
	I1217 01:34:21.554902 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:21.554955 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.558520 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.562302 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:21.562407 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:21.590541 1225677 cri.go:89] found id: ""
	I1217 01:34:21.590564 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.590574 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:21.590580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:21.590636 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:21.626269 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.626340 1225677 cri.go:89] found id: ""
	I1217 01:34:21.626366 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:21.626428 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.630350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:21.630464 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:21.666471 1225677 cri.go:89] found id: ""
	I1217 01:34:21.666498 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.666507 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:21.666516 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:21.666533 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.706780 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:21.706815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.774693 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:21.774729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:21.861669 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:21.861713 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:21.977061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:21.977096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:22.003122 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:22.003171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:22.051916 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:22.051957 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:22.082713 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:22.082746 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:22.116010 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:22.116037 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:22.146809 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:22.146848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:22.228639 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:22.228703 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:22.228732 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.754744 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:24.765436 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:24.765518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:24.794628 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.794658 1225677 cri.go:89] found id: ""
	I1217 01:34:24.794667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:24.794732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.798378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:24.798454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:24.832756 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:24.832781 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:24.832787 1225677 cri.go:89] found id: ""
	I1217 01:34:24.832794 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:24.832850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.836854 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.840412 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:24.840572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:24.868168 1225677 cri.go:89] found id: ""
	I1217 01:34:24.868247 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.868270 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:24.868290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:24.868381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:24.899805 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:24.899825 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:24.899830 1225677 cri.go:89] found id: ""
	I1217 01:34:24.899838 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:24.899893 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.903464 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.906950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:24.907067 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:24.935718 1225677 cri.go:89] found id: ""
	I1217 01:34:24.935744 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.935753 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:24.935760 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:24.935818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:24.967779 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:24.967802 1225677 cri.go:89] found id: ""
	I1217 01:34:24.967811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:24.967863 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.971468 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:24.971534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:25.001724 1225677 cri.go:89] found id: ""
	I1217 01:34:25.001815 1225677 logs.go:282] 0 containers: []
	W1217 01:34:25.001842 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:25.001890 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:25.001925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:25.023512 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:25.023709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:25.051815 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:25.051848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:25.099451 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:25.099487 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:25.141801 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:25.141832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:25.178412 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:25.178444 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:25.285631 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:25.285667 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:25.362578 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:25.362602 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:25.362617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:25.403014 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:25.403050 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:25.510336 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:25.510395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:25.543551 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:25.543582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.129531 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:28.140763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:28.140832 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:28.184591 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.184616 1225677 cri.go:89] found id: ""
	I1217 01:34:28.184624 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:28.184707 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.188557 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:28.188634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:28.222629 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.222651 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.222656 1225677 cri.go:89] found id: ""
	I1217 01:34:28.222664 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:28.222724 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.226610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.230481 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:28.230575 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:28.257099 1225677 cri.go:89] found id: ""
	I1217 01:34:28.257126 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.257135 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:28.257142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:28.257220 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:28.291310 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:28.291347 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.291354 1225677 cri.go:89] found id: ""
	I1217 01:34:28.291388 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:28.291469 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.295342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.298970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:28.299075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:28.329122 1225677 cri.go:89] found id: ""
	I1217 01:34:28.329146 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.329155 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:28.329182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:28.329254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:28.359713 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.359736 1225677 cri.go:89] found id: ""
	I1217 01:34:28.359745 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:28.359803 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.363561 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:28.363633 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:28.397883 1225677 cri.go:89] found id: ""
	I1217 01:34:28.397910 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.397920 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:28.397929 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:28.397941 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.431945 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:28.431974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:28.482268 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:28.482300 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.509035 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:28.509067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.557586 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:28.557623 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.616155 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:28.616203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.647557 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:28.647590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.723102 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:28.723139 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:28.830255 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:28.830293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:28.849322 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:28.849355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:28.919883 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:28.919905 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:28.919926 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.492801 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:31.504000 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:31.504075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:31.539143 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.539163 1225677 cri.go:89] found id: ""
	I1217 01:34:31.539173 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:31.539228 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.543277 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:31.543355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:31.573251 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:31.573271 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:31.573275 1225677 cri.go:89] found id: ""
	I1217 01:34:31.573284 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:31.573337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.577458 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.581377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:31.581451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:31.612241 1225677 cri.go:89] found id: ""
	I1217 01:34:31.612270 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.612280 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:31.612286 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:31.612345 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:31.643539 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.643563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.643569 1225677 cri.go:89] found id: ""
	I1217 01:34:31.643578 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:31.643638 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.647841 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.651771 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:31.651855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:31.685384 1225677 cri.go:89] found id: ""
	I1217 01:34:31.685409 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.685418 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:31.685425 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:31.685487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:31.713458 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.713491 1225677 cri.go:89] found id: ""
	I1217 01:34:31.713501 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:31.713571 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.717510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:31.717598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:31.742954 1225677 cri.go:89] found id: ""
	I1217 01:34:31.742979 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.742989 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:31.742998 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:31.743030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:31.826689 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:31.826712 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:31.826726 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.858359 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:31.858389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.890466 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:31.890494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.920394 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:31.920516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:31.954114 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:31.954143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:32.048397 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:32.048463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:32.068978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:32.069014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:32.126891 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:32.126931 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:32.194493 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:32.194531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:32.278811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:32.278854 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:34.866004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:34.876932 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:34.877040 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:34.904525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:34.904548 1225677 cri.go:89] found id: ""
	I1217 01:34:34.904556 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:34.904634 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.908290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:34.908388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:34.937927 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:34.937962 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:34.937967 1225677 cri.go:89] found id: ""
	I1217 01:34:34.937975 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:34.938053 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.941844 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.945447 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:34.945529 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:34.974834 1225677 cri.go:89] found id: ""
	I1217 01:34:34.974860 1225677 logs.go:282] 0 containers: []
	W1217 01:34:34.974870 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:34.974876 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:34.974932 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:35.015100 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.015121 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.015126 1225677 cri.go:89] found id: ""
	I1217 01:34:35.015134 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:35.015196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.019378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.023124 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:35.023202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:35.055461 1225677 cri.go:89] found id: ""
	I1217 01:34:35.055488 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.055497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:35.055503 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:35.055561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:35.083009 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.083083 1225677 cri.go:89] found id: ""
	I1217 01:34:35.083107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:35.083195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.087719 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:35.087788 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:35.115588 1225677 cri.go:89] found id: ""
	I1217 01:34:35.115615 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.115625 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:35.115649 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:35.115664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:35.165942 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:35.165978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.194775 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:35.194803 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:35.291776 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:35.291811 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:35.338079 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:35.338110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:35.357793 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:35.357824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:35.428871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:35.428893 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:35.428905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.499513 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:35.499548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.540136 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:35.540211 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:35.636873 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:35.636913 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:35.665818 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:35.665889 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.220553 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:38.231749 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:38.231823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:38.259479 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.259500 1225677 cri.go:89] found id: ""
	I1217 01:34:38.259509 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:38.259568 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.263241 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:38.263385 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:38.295256 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.295292 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.295301 1225677 cri.go:89] found id: ""
	I1217 01:34:38.295310 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:38.295378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.300468 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.305174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:38.305294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:38.339161 1225677 cri.go:89] found id: ""
	I1217 01:34:38.339194 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.339204 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:38.339210 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:38.339275 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:38.367494 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.367518 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:38.367524 1225677 cri.go:89] found id: ""
	I1217 01:34:38.367531 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:38.367608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.371441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.375084 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:38.375191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:38.401755 1225677 cri.go:89] found id: ""
	I1217 01:34:38.401784 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.401795 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:38.401801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:38.401890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:38.429928 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.429962 1225677 cri.go:89] found id: ""
	I1217 01:34:38.429971 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:38.430044 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.433894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:38.433965 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:38.461088 1225677 cri.go:89] found id: ""
	I1217 01:34:38.461114 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.461124 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:38.461133 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:38.461144 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:38.544237 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:38.544274 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.574281 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:38.574312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.620093 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:38.620131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.674826 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:38.674902 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.752562 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:38.752603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.781494 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:38.781527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:38.833674 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:38.833706 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:38.933793 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:38.933832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:38.953733 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:38.953782 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:39.029298 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:39.029322 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:39.029336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.557003 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:41.568311 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:41.568412 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:41.601070 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:41.601089 1225677 cri.go:89] found id: ""
	I1217 01:34:41.601097 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:41.601156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.605150 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:41.605227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:41.633863 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:41.633887 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:41.633893 1225677 cri.go:89] found id: ""
	I1217 01:34:41.633901 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:41.633958 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.638555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.644087 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:41.644168 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:41.684237 1225677 cri.go:89] found id: ""
	I1217 01:34:41.684276 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.684287 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:41.684294 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:41.684371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:41.717925 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:41.717993 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.718016 1225677 cri.go:89] found id: ""
	I1217 01:34:41.718032 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:41.718109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.722478 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.726529 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:41.726607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:41.754525 1225677 cri.go:89] found id: ""
	I1217 01:34:41.754552 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.754562 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:41.754571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:41.754673 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:41.784794 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:41.784860 1225677 cri.go:89] found id: ""
	I1217 01:34:41.784883 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:41.784969 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.788882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:41.788980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:41.825117 1225677 cri.go:89] found id: ""
	I1217 01:34:41.825193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.825216 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:41.825233 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:41.825259 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:41.934154 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:41.934191 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:41.955231 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:41.955263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:42.023779 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:42.023819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:42.054183 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:42.054218 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:42.146898 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:42.147005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:42.249519 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:42.249543 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:42.249557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:42.280803 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:42.280833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:42.327682 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:42.327731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:42.373795 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:42.373832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:42.415409 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:42.415437 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:44.951197 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:44.962939 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:44.963016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:44.996268 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:44.996297 1225677 cri.go:89] found id: ""
	I1217 01:34:44.996306 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:44.996365 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.016281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:45.016367 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:45.152354 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.152375 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.152380 1225677 cri.go:89] found id: ""
	I1217 01:34:45.152389 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:45.152473 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.161519 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.169793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:45.169869 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:45.269649 1225677 cri.go:89] found id: ""
	I1217 01:34:45.269685 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.269696 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:45.269715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:45.269816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:45.322137 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.322210 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:45.322250 1225677 cri.go:89] found id: ""
	I1217 01:34:45.322320 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:45.322406 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.327229 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.331531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:45.331703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:45.362501 1225677 cri.go:89] found id: ""
	I1217 01:34:45.362571 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.362602 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:45.362624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:45.362696 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:45.394160 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.394240 1225677 cri.go:89] found id: ""
	I1217 01:34:45.394258 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:45.394335 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.398315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:45.398397 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:45.426737 1225677 cri.go:89] found id: ""
	I1217 01:34:45.426780 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.426790 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:45.426819 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:45.426839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:45.503383 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:45.503464 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:45.503485 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:45.535637 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:45.535672 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.583362 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:45.583398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.613182 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:45.613214 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:45.695579 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:45.695626 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:45.729534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:45.729563 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:45.826222 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:45.826262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:45.846157 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:45.846195 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.911389 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:45.911426 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.983046 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:45.983084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.519530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:48.530493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:48.530565 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:48.560366 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:48.560471 1225677 cri.go:89] found id: ""
	I1217 01:34:48.560496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:48.560585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.564848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:48.564920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:48.593560 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.593628 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:48.593666 1225677 cri.go:89] found id: ""
	I1217 01:34:48.593696 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:48.593783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.597895 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.601634 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:48.601718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:48.631022 1225677 cri.go:89] found id: ""
	I1217 01:34:48.631048 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.631057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:48.631064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:48.631122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:48.656804 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:48.656829 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.656834 1225677 cri.go:89] found id: ""
	I1217 01:34:48.656841 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:48.656898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.660979 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.664698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:48.664770 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:48.692344 1225677 cri.go:89] found id: ""
	I1217 01:34:48.692372 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.692383 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:48.692389 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:48.692481 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:48.721997 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:48.722020 1225677 cri.go:89] found id: ""
	I1217 01:34:48.722029 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:48.722111 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.726120 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:48.726247 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:48.753313 1225677 cri.go:89] found id: ""
	I1217 01:34:48.753339 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.753349 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:48.753358 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:48.753388 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:48.849435 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:48.849474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:48.870486 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:48.870523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:48.943874 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:48.943904 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:48.943919 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.991171 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:48.991205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:49.020622 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:49.020649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:49.064904 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:49.064942 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:49.143148 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:49.143186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:49.174999 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:49.175086 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:49.209127 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:49.209156 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:49.296275 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:49.296325 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:51.840412 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:51.851134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:51.851204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:51.880791 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:51.880811 1225677 cri.go:89] found id: ""
	I1217 01:34:51.880820 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:51.880879 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.884883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:51.884962 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:51.911511 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:51.911535 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:51.911541 1225677 cri.go:89] found id: ""
	I1217 01:34:51.911549 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:51.911607 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.915352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.918918 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:51.918986 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:51.950127 1225677 cri.go:89] found id: ""
	I1217 01:34:51.950152 1225677 logs.go:282] 0 containers: []
	W1217 01:34:51.950163 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:51.950169 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:51.950266 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:51.978696 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:51.978725 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:51.978731 1225677 cri.go:89] found id: ""
	I1217 01:34:51.978738 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:51.978795 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.982736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.986411 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:51.986482 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:52.016886 1225677 cri.go:89] found id: ""
	I1217 01:34:52.016911 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.016920 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:52.016926 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:52.016989 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:52.045870 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.045895 1225677 cri.go:89] found id: ""
	I1217 01:34:52.045904 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:52.045962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:52.049906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:52.049977 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:52.077565 1225677 cri.go:89] found id: ""
	I1217 01:34:52.077592 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.077604 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:52.077614 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:52.077646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:52.105176 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:52.105205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:52.211964 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:52.211999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:52.252350 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:52.252382 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:52.306053 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:52.306088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:52.376262 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:52.376302 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:52.403480 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:52.403508 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.431952 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:52.431983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:52.510953 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:52.510990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:52.555450 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:52.555482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:52.574086 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:52.574119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:52.644412 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
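	[editor's note] Each cycle above fails at the same point: kubectl cannot reach the apiserver on localhost:8443 ("connection refused"), so minikube falls back to gathering component logs and retries a few seconds later. As a minimal, hypothetical sketch (not part of the test harness or its output), the failing condition can be reproduced by probing that port directly; names and the port value below are taken from the log, everything else is illustrative.

	// probe.go -- hypothetical illustration, not minikube code
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverListening reports whether anything accepts TCP connections on addr.
	// While the kube-apiserver container is down, DialTimeout returns an error
	// such as "connect: connection refused", matching the kubectl errors above.
	func apiserverListening(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		// localhost:8443 is the endpoint the failing "kubectl describe nodes" calls target.
		fmt.Println(apiserverListening("localhost:8443", 2*time.Second))
	}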
	I1217 01:34:55.144646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:55.155615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:55.155693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:55.184697 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.184716 1225677 cri.go:89] found id: ""
	I1217 01:34:55.184724 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:55.184781 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.188462 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:55.188538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:55.217937 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.217961 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.217966 1225677 cri.go:89] found id: ""
	I1217 01:34:55.217974 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:55.218030 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.221924 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.226643 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:55.226714 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:55.254617 1225677 cri.go:89] found id: ""
	I1217 01:34:55.254645 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.254655 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:55.254662 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:55.254721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:55.282393 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.282419 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.282424 1225677 cri.go:89] found id: ""
	I1217 01:34:55.282432 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:55.282485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.286357 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.289912 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:55.289992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:55.316252 1225677 cri.go:89] found id: ""
	I1217 01:34:55.316278 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.316288 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:55.316295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:55.316368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:55.343249 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.343314 1225677 cri.go:89] found id: ""
	I1217 01:34:55.343337 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:55.343433 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.347319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:55.347448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:55.381545 1225677 cri.go:89] found id: ""
	I1217 01:34:55.381629 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.381645 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:55.381656 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:55.381669 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.421981 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:55.422014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.453301 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:55.453342 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.480646 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:55.480687 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:55.570826 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.570849 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:55.570863 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.599216 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:55.599257 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.658218 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:55.658310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.745919 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:55.745955 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:55.838064 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:55.838101 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:55.888374 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:55.888405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:55.996293 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:55.996331 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:58.522397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:58.536202 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:58.536271 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:58.566870 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:58.566965 1225677 cri.go:89] found id: ""
	I1217 01:34:58.566994 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:58.567139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.571283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:58.571363 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:58.598180 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:58.598208 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.598213 1225677 cri.go:89] found id: ""
	I1217 01:34:58.598222 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:58.598297 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.602201 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.605913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:58.605997 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:58.636167 1225677 cri.go:89] found id: ""
	I1217 01:34:58.636193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.636202 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:58.636209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:58.636270 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:58.662111 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:58.662135 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:58.662140 1225677 cri.go:89] found id: ""
	I1217 01:34:58.662148 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:58.662209 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.666315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.670253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:58.670348 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:58.696144 1225677 cri.go:89] found id: ""
	I1217 01:34:58.696219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.696244 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:58.696265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:58.696347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:58.726742 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.726767 1225677 cri.go:89] found id: ""
	I1217 01:34:58.726776 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:58.726832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.730710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:58.730785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:58.759394 1225677 cri.go:89] found id: ""
	I1217 01:34:58.759421 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.759431 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:58.759440 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:58.759454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.817531 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:58.817569 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.847360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:58.847389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:58.929741 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:58.929776 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:58.968951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:58.968982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:59.043218 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:59.043239 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:59.043255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:59.070405 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:59.070431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:59.146784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:59.146829 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:59.179445 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:59.179479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:59.286441 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:59.286479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:59.308412 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:59.308540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.850397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:01.863234 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:01.863368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:01.898442 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:01.898473 1225677 cri.go:89] found id: ""
	I1217 01:35:01.898484 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:01.898577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.903064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:01.903142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:01.936524 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.936547 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:01.936551 1225677 cri.go:89] found id: ""
	I1217 01:35:01.936559 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:01.936625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.942865 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.947963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:01.948071 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:01.979359 1225677 cri.go:89] found id: ""
	I1217 01:35:01.979384 1225677 logs.go:282] 0 containers: []
	W1217 01:35:01.979393 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:01.979399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:01.979466 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:02.012882 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.012925 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.012931 1225677 cri.go:89] found id: ""
	I1217 01:35:02.012975 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:02.013055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.017605 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.021797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:02.021870 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:02.049550 1225677 cri.go:89] found id: ""
	I1217 01:35:02.049621 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.049638 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:02.049646 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:02.049722 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:02.081301 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.081326 1225677 cri.go:89] found id: ""
	I1217 01:35:02.081335 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:02.081392 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.086118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:02.086210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:02.125352 1225677 cri.go:89] found id: ""
	I1217 01:35:02.125374 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.125383 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:02.125393 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:02.125405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:02.197255 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:02.197318 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:02.197355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:02.226446 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:02.226488 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:02.271257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:02.271293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:02.314955 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:02.314988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.386430 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:02.386468 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.417607 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:02.417682 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.449011 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:02.449041 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:02.551859 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:02.551899 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:02.571928 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:02.571960 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:02.659356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:02.659395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:05.190765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:05.203695 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:05.203771 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:05.238686 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.238707 1225677 cri.go:89] found id: ""
	I1217 01:35:05.238716 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:05.238778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.242613 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:05.242687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:05.272627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.272661 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.272667 1225677 cri.go:89] found id: ""
	I1217 01:35:05.272675 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:05.272757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.277184 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.281337 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:05.281414 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:05.309340 1225677 cri.go:89] found id: ""
	I1217 01:35:05.309361 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.309370 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:05.309377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:05.309437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:05.342268 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.342294 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.342300 1225677 cri.go:89] found id: ""
	I1217 01:35:05.342308 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:05.342394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.346668 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.350724 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:05.350805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:05.378257 1225677 cri.go:89] found id: ""
	I1217 01:35:05.378289 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.378298 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:05.378305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:05.378366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:05.406348 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.406370 1225677 cri.go:89] found id: ""
	I1217 01:35:05.406379 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:05.406455 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.410653 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:05.410724 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:05.441777 1225677 cri.go:89] found id: ""
	I1217 01:35:05.441802 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.441812 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:05.441820 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:05.441832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:05.521081 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:05.521113 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:05.521127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.559491 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:05.559525 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.608690 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:05.608727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.640635 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:05.640666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:05.720771 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:05.720808 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:05.824388 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:05.824427 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.864839 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:05.864871 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.960476 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:05.960520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.992555 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:05.992588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:06.045891 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:06.045925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:08.568611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:08.579598 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:08.579681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:08.607399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.607421 1225677 cri.go:89] found id: ""
	I1217 01:35:08.607430 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:08.607485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.611906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:08.611982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:08.638447 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.638470 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:08.638476 1225677 cri.go:89] found id: ""
	I1217 01:35:08.638484 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:08.638558 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.642337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.646066 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:08.646162 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:08.673000 1225677 cri.go:89] found id: ""
	I1217 01:35:08.673026 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.673036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:08.673042 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:08.673135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:08.701768 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:08.701792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:08.701798 1225677 cri.go:89] found id: ""
	I1217 01:35:08.701806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:08.701892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.705733 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.709545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:08.709620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:08.736283 1225677 cri.go:89] found id: ""
	I1217 01:35:08.736309 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.736319 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:08.736325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:08.736383 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:08.763589 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:08.763610 1225677 cri.go:89] found id: ""
	I1217 01:35:08.763618 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:08.763679 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.768008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:08.768157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:08.794921 1225677 cri.go:89] found id: ""
	I1217 01:35:08.794948 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.794957 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:08.794967 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:08.795003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:08.866335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:08.866356 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:08.866371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.894862 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:08.894894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.945712 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:08.945749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:09.030175 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:09.030213 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:09.057626 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:09.057656 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:09.140070 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:09.140109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:09.249646 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:09.249685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:09.269874 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:09.269906 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:09.317090 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:09.317126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:09.346482 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:09.346513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:11.877651 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:11.889575 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:11.889645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:11.917211 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:11.917234 1225677 cri.go:89] found id: ""
	I1217 01:35:11.917243 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:11.917309 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.921144 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:11.921223 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:11.955516 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:11.955536 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:11.955541 1225677 cri.go:89] found id: ""
	I1217 01:35:11.955548 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:11.955604 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.959308 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.962862 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:11.962933 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:11.991261 1225677 cri.go:89] found id: ""
	I1217 01:35:11.991284 1225677 logs.go:282] 0 containers: []
	W1217 01:35:11.991293 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:11.991299 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:11.991366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:12.023452 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.023477 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.023483 1225677 cri.go:89] found id: ""
	I1217 01:35:12.023491 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:12.023581 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.027715 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.031641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:12.031751 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:12.059135 1225677 cri.go:89] found id: ""
	I1217 01:35:12.059211 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.059234 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:12.059255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:12.059343 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:12.092809 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.092830 1225677 cri.go:89] found id: ""
	I1217 01:35:12.092839 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:12.092915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.096814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:12.096963 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:12.132911 1225677 cri.go:89] found id: ""
	I1217 01:35:12.132936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.132946 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:12.132955 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:12.132966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:12.235310 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:12.235346 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:12.255554 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:12.255587 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:12.303522 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:12.303560 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.374998 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:12.375032 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:12.461333 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:12.461371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:12.547450 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
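	The "describe nodes" failure above, and the identical failures in the retry cycles that follow, all reduce to kubectl being refused a connection on localhost:8443: the kube-apiserver container that crictl finds is present but not serving. A minimal sketch of how one might confirm this by hand on the node is below; the profile name and container id are placeholders, not values taken from this log.

	  minikube ssh -p <profile>                  # open a shell on the node; <profile> is hypothetical
	  sudo crictl ps -a --name kube-apiserver    # is the apiserver container Running or Exited?
	  sudo crictl logs --tail 50 <container-id>  # inspect its most recent startup output
	  curl -k https://localhost:8443/healthz     # "connection refused" means nothing is listening on 8443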
	I1217 01:35:12.547475 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:12.547489 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:12.574864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:12.574892 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:12.619775 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:12.619816 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.649040 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:12.649123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.677296 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:12.677326 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.212228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:15.225138 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:15.225215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:15.259192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.259218 1225677 cri.go:89] found id: ""
	I1217 01:35:15.259228 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:15.259287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.263205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:15.263279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:15.290493 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.290516 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.290521 1225677 cri.go:89] found id: ""
	I1217 01:35:15.290529 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:15.290588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.294490 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.298107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:15.298208 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:15.325021 1225677 cri.go:89] found id: ""
	I1217 01:35:15.325047 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.325057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:15.325063 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:15.325125 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:15.353712 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.353744 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.353750 1225677 cri.go:89] found id: ""
	I1217 01:35:15.353758 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:15.353828 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.357883 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.361729 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:15.361817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:15.389342 1225677 cri.go:89] found id: ""
	I1217 01:35:15.389370 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.389379 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:15.389386 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:15.389449 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:15.418437 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.418470 1225677 cri.go:89] found id: ""
	I1217 01:35:15.418479 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:15.418553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.422466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:15.422548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:15.449297 1225677 cri.go:89] found id: ""
	I1217 01:35:15.449333 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.449343 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:15.449370 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:15.449394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:15.468355 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:15.468385 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.494969 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:15.495005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.543170 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:15.543209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:15.616803 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:15.616829 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:15.616845 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.659996 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:15.660031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.730995 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:15.731034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.758963 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:15.758994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.785562 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:15.785633 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:15.872457 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:15.872494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.904808 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:15.904838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:18.506161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:18.518520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:18.518589 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:18.550949 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.550972 1225677 cri.go:89] found id: ""
	I1217 01:35:18.550982 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:18.551041 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.554800 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:18.554880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:18.582497 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:18.582522 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.582527 1225677 cri.go:89] found id: ""
	I1217 01:35:18.582535 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:18.582594 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.586831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.590486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:18.590560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:18.617401 1225677 cri.go:89] found id: ""
	I1217 01:35:18.617426 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.617436 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:18.617443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:18.617504 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:18.648400 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:18.648458 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.648464 1225677 cri.go:89] found id: ""
	I1217 01:35:18.648472 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:18.648530 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.652380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.655820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:18.655916 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:18.689519 1225677 cri.go:89] found id: ""
	I1217 01:35:18.689544 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.689553 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:18.689560 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:18.689621 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:18.718284 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:18.718306 1225677 cri.go:89] found id: ""
	I1217 01:35:18.718313 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:18.718368 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.722268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:18.722372 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:18.753514 1225677 cri.go:89] found id: ""
	I1217 01:35:18.753542 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.753558 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:18.753567 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:18.753611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:18.771813 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:18.771842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:18.845441 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:18.845463 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:18.845477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.872553 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:18.872582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.922099 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:18.922176 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.950258 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:18.950285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:18.990211 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:18.990241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:19.031127 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:19.031164 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:19.107071 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:19.107109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:19.138299 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:19.138327 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:19.222624 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:19.222660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:21.834640 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:21.845711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:21.845784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:21.895249 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:21.895280 1225677 cri.go:89] found id: ""
	I1217 01:35:21.895292 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:21.895371 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.902322 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:21.902404 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:21.943815 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:21.943857 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:21.943863 1225677 cri.go:89] found id: ""
	I1217 01:35:21.943877 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:21.943963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.949206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.954547 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:21.954640 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:21.988594 1225677 cri.go:89] found id: ""
	I1217 01:35:21.988620 1225677 logs.go:282] 0 containers: []
	W1217 01:35:21.988630 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:21.988636 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:21.988718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:22.024625 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.024646 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.024651 1225677 cri.go:89] found id: ""
	I1217 01:35:22.024660 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:22.024760 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.029143 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.033935 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:22.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:22.067922 1225677 cri.go:89] found id: ""
	I1217 01:35:22.067946 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.067955 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:22.067961 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:22.068020 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:22.097619 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.097641 1225677 cri.go:89] found id: ""
	I1217 01:35:22.097649 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:22.097706 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.101692 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:22.101766 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:22.136868 1225677 cri.go:89] found id: ""
	I1217 01:35:22.136891 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.136900 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:22.136911 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:22.136923 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:22.164209 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:22.164236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:22.208399 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:22.208512 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:22.256618 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:22.256650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.287201 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:22.287237 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.314443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:22.314472 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:22.346752 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:22.346780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:22.445530 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:22.445567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:22.464378 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:22.464409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.554715 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:22.554749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:22.659061 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:22.659103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:22.731143 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.231455 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:25.242812 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:25.242949 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:25.280443 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.280470 1225677 cri.go:89] found id: ""
	I1217 01:35:25.280478 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:25.280536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.284885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:25.285008 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:25.313823 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.313846 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.313852 1225677 cri.go:89] found id: ""
	I1217 01:35:25.313859 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:25.313939 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.317952 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.321539 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:25.321620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:25.354565 1225677 cri.go:89] found id: ""
	I1217 01:35:25.354632 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.354656 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:25.354681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:25.354777 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:25.386743 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.386774 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.386779 1225677 cri.go:89] found id: ""
	I1217 01:35:25.386787 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:25.386857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.390671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.394226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:25.394339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:25.421123 1225677 cri.go:89] found id: ""
	I1217 01:35:25.421212 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.421228 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:25.421236 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:25.421310 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:25.448879 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.448904 1225677 cri.go:89] found id: ""
	I1217 01:35:25.448913 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:25.448971 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.452707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:25.452782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:25.479351 1225677 cri.go:89] found id: ""
	I1217 01:35:25.479379 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.479389 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:25.479399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:25.479410 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:25.577317 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:25.577354 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:25.600156 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:25.600203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:25.679524 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.679600 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:25.679621 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.706792 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:25.706824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.764895 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:25.764934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.796158 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:25.796188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.823684 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:25.823721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:25.857273 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:25.857303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.915963 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:25.916003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.992485 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:25.992520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:28.577965 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:28.588733 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:28.588802 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:28.621192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.621211 1225677 cri.go:89] found id: ""
	I1217 01:35:28.621220 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:28.621279 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.625055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:28.625124 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:28.651718 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:28.651738 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.651742 1225677 cri.go:89] found id: ""
	I1217 01:35:28.651749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:28.651807 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.656353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.660550 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:28.660620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:28.688556 1225677 cri.go:89] found id: ""
	I1217 01:35:28.688580 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.688589 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:28.688596 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:28.688654 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:28.716478 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:28.716503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:28.716508 1225677 cri.go:89] found id: ""
	I1217 01:35:28.716516 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:28.716603 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.720442 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.723785 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:28.723862 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:28.750780 1225677 cri.go:89] found id: ""
	I1217 01:35:28.750807 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.750817 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:28.750823 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:28.750882 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:28.777746 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:28.777772 1225677 cri.go:89] found id: ""
	I1217 01:35:28.777781 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:28.777836 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.781586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:28.781707 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:28.812032 1225677 cri.go:89] found id: ""
	I1217 01:35:28.812062 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.812072 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:28.812081 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:28.812115 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:28.910028 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:28.910067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.938533 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:28.938565 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.982530 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:28.982566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:29.059912 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:29.059948 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:29.087417 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:29.087449 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:29.141591 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:29.141622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:29.162662 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:29.162694 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:29.245511 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:29.245537 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:29.245553 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:29.286747 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:29.286784 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:29.317045 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:29.317075 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:31.896935 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:31.908531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:31.908605 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:31.951663 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:31.951684 1225677 cri.go:89] found id: ""
	I1217 01:35:31.951692 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:31.951746 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.956325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:31.956501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:31.990512 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:31.990578 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:31.990598 1225677 cri.go:89] found id: ""
	I1217 01:35:31.990625 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:31.990708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.994957 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.001450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:32.001597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:32.033107 1225677 cri.go:89] found id: ""
	I1217 01:35:32.033136 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.033146 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:32.033153 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:32.033245 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:32.061118 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.061140 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.061145 1225677 cri.go:89] found id: ""
	I1217 01:35:32.061153 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:32.061208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.065195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.068963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:32.069066 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:32.099914 1225677 cri.go:89] found id: ""
	I1217 01:35:32.099941 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.099951 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:32.099957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:32.100018 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:32.134003 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.134028 1225677 cri.go:89] found id: ""
	I1217 01:35:32.134044 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:32.134101 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.138837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:32.138909 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:32.178095 1225677 cri.go:89] found id: ""
	I1217 01:35:32.178168 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.178193 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:32.178210 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:32.178223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:32.219018 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:32.219049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:32.328076 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:32.328182 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:32.347854 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:32.347887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:32.389069 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:32.389143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.464016 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:32.464052 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.492348 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:32.492466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.519965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:32.520035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:32.589420 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:32.589485 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:32.589506 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:32.615780 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:32.615814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:32.668491 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:32.668527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
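The cycle above is the log collector locating each control-plane container while the cluster is unhealthy: "crictl ps -a --quiet --name=<component>" returns the matching container IDs, and "crictl logs --tail 400 <id>" pulls the recent output for each one. Below is a minimal Go sketch of that pattern run locally; the file name and the containerIDs helper are illustrative only, and it assumes crictl is available via sudo on the current host, whereas minikube issues the same commands through its ssh_runner inside the node.

// collect_crio_logs.go - illustrative only: the same crictl calls the cycle above
// runs over SSH, executed locally. Assumes crictl is installed and sudo works.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs "sudo crictl ps -a --quiet --name=<name>" and returns the IDs,
// one per output line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			continue
		}
		for _, id := range ids {
			// Same tail length the collector uses in the cycle above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}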
	I1217 01:35:35.253556 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:35.266266 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:35.266344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:35.303632 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.303658 1225677 cri.go:89] found id: ""
	I1217 01:35:35.303667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:35.303726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.307439 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:35.307511 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:35.336107 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.336131 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.336136 1225677 cri.go:89] found id: ""
	I1217 01:35:35.336143 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:35.336196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.340106 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.343587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:35.343667 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:35.374453 1225677 cri.go:89] found id: ""
	I1217 01:35:35.374483 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.374492 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:35.374498 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:35.374560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:35.401769 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.401792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.401798 1225677 cri.go:89] found id: ""
	I1217 01:35:35.401806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:35.401860 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.405507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.409182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:35.409254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:35.437191 1225677 cri.go:89] found id: ""
	I1217 01:35:35.437229 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.437280 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:35.437303 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:35.437454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:35.464026 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.464048 1225677 cri.go:89] found id: ""
	I1217 01:35:35.464056 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:35.464113 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.467752 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:35.467854 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:35.495119 1225677 cri.go:89] found id: ""
	I1217 01:35:35.495143 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.495152 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:35.495161 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:35.495173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.538118 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:35.538157 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.612361 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:35.612398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.642424 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:35.642454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.671140 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:35.671168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.753840 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:35.753879 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:35.791176 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:35.791207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:35.861567 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:35.861588 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:35.861604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.887544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:35.887573 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.930868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:35.930901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:36.035955 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:36.035997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
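Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl points at https://localhost:8443, nothing is listening yet, so client-go reports "connect: connection refused" (the memcache.go lines) before kubectl gives up. A quick way to reproduce just that check, independent of kubectl, is a plain TCP dial against the same port; the snippet below is an illustrative probe, not part of the test suite, and assumes it runs on the node where port 8443 is expected.

// probe_apiserver.go - illustrative probe: is anything accepting TCP connections
// on the apiserver port that kubectl is pointed at in the cycles above?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the "connect: connection refused" seen in the kubectl stderr above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}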
	I1217 01:35:38.556940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:38.568341 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:38.568410 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:38.602139 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:38.602163 1225677 cri.go:89] found id: ""
	I1217 01:35:38.602172 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:38.602234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.606168 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:38.606244 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:38.636762 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:38.636782 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:38.636787 1225677 cri.go:89] found id: ""
	I1217 01:35:38.636795 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:38.636849 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.640703 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.644870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:38.644980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:38.672028 1225677 cri.go:89] found id: ""
	I1217 01:35:38.672105 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.672130 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:38.672152 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:38.672252 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:38.702063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:38.702088 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:38.702096 1225677 cri.go:89] found id: ""
	I1217 01:35:38.702104 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:38.702189 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.706075 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.710843 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:38.710923 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:38.739176 1225677 cri.go:89] found id: ""
	I1217 01:35:38.739204 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.739214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:38.739221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:38.739281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:38.765721 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:38.765749 1225677 cri.go:89] found id: ""
	I1217 01:35:38.765759 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:38.765835 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.769950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:38.770026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:38.797985 1225677 cri.go:89] found id: ""
	I1217 01:35:38.798013 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.798023 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:38.798033 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:38.798065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:38.898407 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:38.898448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.917886 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:38.917920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:38.999335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:38.999368 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:38.999384 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:39.041692 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:39.041729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:39.089675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:39.089712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:39.172952 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:39.172988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:39.211704 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:39.211736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:39.241891 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:39.241920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:39.276958 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:39.276988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:39.364067 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:39.364119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
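The timestamps show the collector re-checking for an apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every three seconds and re-gathering the same logs while the check keeps failing. A rough Go sketch of that wait loop follows; the three-second interval is read off the timestamps, and the two-minute deadline is an assumption for illustration, not minikube's actual retry policy.

// wait_apiserver.go - rough sketch of the poll visible in the timestamps above:
// look for a kube-apiserver process every ~3s until it appears or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process; pgrep exits 0
// only when at least one process matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, for illustration only
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}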
	I1217 01:35:41.897002 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:41.908024 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:41.908100 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:41.937482 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:41.937556 1225677 cri.go:89] found id: ""
	I1217 01:35:41.937569 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:41.937630 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.941542 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:41.941611 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:41.987116 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:41.987139 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:41.987145 1225677 cri.go:89] found id: ""
	I1217 01:35:41.987153 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:41.987206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.991091 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.994831 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:41.994905 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:42.033990 1225677 cri.go:89] found id: ""
	I1217 01:35:42.034016 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.034025 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:42.034031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:42.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:42.065878 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:42.065959 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.065980 1225677 cri.go:89] found id: ""
	I1217 01:35:42.066005 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:42.066122 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.071367 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.076378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:42.076531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:42.123414 1225677 cri.go:89] found id: ""
	I1217 01:35:42.123521 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.123583 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:42.123610 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:42.123706 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:42.163210 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.163302 1225677 cri.go:89] found id: ""
	I1217 01:35:42.163328 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:42.163431 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.168650 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:42.168758 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:42.211741 1225677 cri.go:89] found id: ""
	I1217 01:35:42.211767 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.211777 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:42.211787 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:42.211800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:42.252091 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:42.252126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:42.356409 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:42.356465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:42.377129 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:42.377163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:42.449855 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:42.449879 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:42.449893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:42.476498 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:42.476530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:42.518303 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:42.518337 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.548819 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:42.548852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.578811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:42.578840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:42.658356 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:42.658395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:42.700126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:42.700173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
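The "found id:" and "No container was found matching ..." lines are the parsed result of the --quiet listings: crictl prints one container ID per line, and an empty result means that component (kube-proxy, coredns, kindnet in these cycles) has no container yet. A small sketch of that parsing step follows; parseIDs and the sample values are hypothetical stand-ins, not minikube's cri.go code.

// parse_ids.go - sketch of turning "crictl ps -a --quiet" output into the
// "found id" / "No container was found matching" lines seen above.
package main

import (
	"fmt"
	"strings"
)

// parseIDs splits --quiet output (one container ID per line) into a slice,
// dropping blank lines so an empty listing yields zero IDs.
func parseIDs(raw string) []string {
	var ids []string
	for _, line := range strings.Split(raw, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	samples := map[string]string{
		"kube-scheduler": "id-one\nid-two\n", // placeholder IDs, two containers as above
		"kube-proxy":     "",                 // nothing running yet, as in the cycles above
	}
	for name, raw := range samples {
		ids := parseIDs(raw)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}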
	I1217 01:35:45.276979 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:45.301570 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:45.301737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:45.339316 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:45.339342 1225677 cri.go:89] found id: ""
	I1217 01:35:45.339351 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:45.339441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.343543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:45.343652 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:45.374479 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.374552 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.374574 1225677 cri.go:89] found id: ""
	I1217 01:35:45.374600 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:45.374672 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.378901 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.382870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:45.382942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:45.413785 1225677 cri.go:89] found id: ""
	I1217 01:35:45.413816 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.413825 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:45.413832 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:45.413894 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:45.446395 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.446417 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.446423 1225677 cri.go:89] found id: ""
	I1217 01:35:45.446431 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:45.446508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.450414 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.454372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:45.454448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:45.483846 1225677 cri.go:89] found id: ""
	I1217 01:35:45.483918 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.483942 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:45.483963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:45.484039 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:45.515890 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.515962 1225677 cri.go:89] found id: ""
	I1217 01:35:45.515986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:45.516060 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.519980 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:45.520107 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:45.548900 1225677 cri.go:89] found id: ""
	I1217 01:35:45.548984 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.549001 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:45.549011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:45.549023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.594641 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:45.594680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.623072 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:45.623171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:45.701558 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:45.701599 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:45.775358 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:45.775423 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:45.775443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.822675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:45.822712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.904212 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:45.904249 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.934553 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:45.934581 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:45.966200 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:45.966231 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:46.073612 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:46.073651 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:46.092826 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:46.092860 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
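Each failed "describe nodes" entry records the command, the exit status, and stdout and stderr separately; the command is the version-pinned kubectl under /var/lib/minikube/binaries/v1.34.2 with an explicit --kubeconfig. The sketch below shows one way to capture the two streams independently so a failure prints like the stdout:/stderr: blocks above; the wrapper itself is illustrative and assumes it runs on the node where those paths exist.

// describe_nodes.go - illustrative: run the bundled kubectl the way the log records
// it, keeping stdout and stderr separate so failures read like the blocks above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.2/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	fmt.Printf("stdout:\n%s\n", stdout.String())
	fmt.Printf("stderr:\n%s\n", stderr.String())
	if err != nil {
		// With the apiserver still down this exits with status 1, as in the log.
		fmt.Println("describe nodes failed:", err)
	}
}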
	I1217 01:35:48.626362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:48.637081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:48.637157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:48.663951 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.664018 1225677 cri.go:89] found id: ""
	I1217 01:35:48.664045 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:48.664137 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.667889 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:48.668007 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:48.695424 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:48.695498 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:48.695518 1225677 cri.go:89] found id: ""
	I1217 01:35:48.695570 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:48.695667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.699980 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.703779 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:48.703875 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:48.731347 1225677 cri.go:89] found id: ""
	I1217 01:35:48.731372 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.731381 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:48.731388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:48.731448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:48.761776 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:48.761802 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:48.761808 1225677 cri.go:89] found id: ""
	I1217 01:35:48.761816 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:48.761875 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.766072 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.769796 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:48.769871 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:48.799377 1225677 cri.go:89] found id: ""
	I1217 01:35:48.799404 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.799412 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:48.799418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:48.799477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:48.828149 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:48.828173 1225677 cri.go:89] found id: ""
	I1217 01:35:48.828192 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:48.828254 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.832599 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:48.832717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:48.858554 1225677 cri.go:89] found id: ""
	I1217 01:35:48.858587 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.858597 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:48.858626 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:48.858643 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:48.894472 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:48.894502 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:48.969952 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:48.969978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:48.969994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:49.014023 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:49.014058 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:49.092630 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:49.092671 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:49.197053 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:49.197088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:49.225929 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:49.225963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:49.253145 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:49.253174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:49.301391 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:49.301428 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:49.337786 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:49.337819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:49.367000 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:49.367029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
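Besides per-container logs, each cycle also pulls host-level sources: journalctl for the kubelet and crio units, and a severity-filtered dmesg piped through tail. Because the dmesg command uses a pipe (and the container-status fallback uses command substitution and ||), these are wrapped in /bin/bash -c rather than executed directly. A short sketch of that collection step, assuming the same tools are available locally:

// host_logs.go - sketch of the host-level collection in each cycle. The pipe and
// shell operators are why the collector wraps these commands in "bash -c".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	commands := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, c := range commands {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", c, out)
	}
}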
	I1217 01:35:51.942903 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:51.957586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:51.957662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:52.007996 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.008017 1225677 cri.go:89] found id: ""
	I1217 01:35:52.008026 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:52.008082 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.015080 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:52.015148 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:52.052213 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.052249 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.052255 1225677 cri.go:89] found id: ""
	I1217 01:35:52.052262 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:52.052318 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.056182 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.059959 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:52.060033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:52.090239 1225677 cri.go:89] found id: ""
	I1217 01:35:52.090264 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.090274 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:52.090281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:52.090341 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:52.118854 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:52.118874 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.118879 1225677 cri.go:89] found id: ""
	I1217 01:35:52.118886 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:52.118946 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.125093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.128837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:52.128931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:52.157907 1225677 cri.go:89] found id: ""
	I1217 01:35:52.157936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.157945 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:52.157957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:52.158017 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:52.191428 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.191451 1225677 cri.go:89] found id: ""
	I1217 01:35:52.191459 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:52.191543 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.195375 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:52.195456 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:52.224407 1225677 cri.go:89] found id: ""
	I1217 01:35:52.224468 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.224477 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:52.224486 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:52.224498 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.252950 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:52.252981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.279228 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:52.279258 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:52.298974 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:52.299007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:52.370510 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:52.370544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:52.370588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.418893 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:52.418934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:52.499956 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:52.499992 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:52.542158 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:52.542187 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:52.643325 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:52.643367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.671238 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:52.671267 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.712214 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:52.712252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.294635 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:55.305795 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:55.305897 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:55.341120 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.341143 1225677 cri.go:89] found id: ""
	I1217 01:35:55.341152 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:55.341208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.345154 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:55.345236 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:55.376865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.376937 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.376959 1225677 cri.go:89] found id: ""
	I1217 01:35:55.376982 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:55.377065 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.381380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.385355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:55.385472 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:55.412679 1225677 cri.go:89] found id: ""
	I1217 01:35:55.412701 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.412710 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:55.412716 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:55.412773 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:55.439554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.439573 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.439578 1225677 cri.go:89] found id: ""
	I1217 01:35:55.439585 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:55.439639 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.443337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.446737 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:55.446804 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:55.478015 1225677 cri.go:89] found id: ""
	I1217 01:35:55.478039 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.478052 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:55.478065 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:55.478136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:55.503877 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:55.503940 1225677 cri.go:89] found id: ""
	I1217 01:35:55.503964 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:55.504038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.507809 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:55.507880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:55.539899 1225677 cri.go:89] found id: ""
	I1217 01:35:55.539926 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.539935 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:55.539951 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:55.539963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:55.642073 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:55.642111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:55.662102 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:55.662143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.689162 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:55.689192 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.728771 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:55.728804 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.755851 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:55.755878 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:55.839759 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:55.839805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:55.910162 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:55.910183 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:55.910197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.962626 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:55.962664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:56.057075 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:56.057126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:56.095037 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:56.095069 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:58.632280 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:58.643092 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:58.643199 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:58.670245 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:58.670268 1225677 cri.go:89] found id: ""
	I1217 01:35:58.670277 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:58.670332 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.673988 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:58.674059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:58.706113 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:58.706135 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:58.706140 1225677 cri.go:89] found id: ""
	I1217 01:35:58.706148 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:58.706234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.710732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.714631 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:58.714747 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:58.742956 1225677 cri.go:89] found id: ""
	I1217 01:35:58.742982 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.742991 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:58.742997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:58.743058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:58.774022 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:58.774044 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:58.774050 1225677 cri.go:89] found id: ""
	I1217 01:35:58.774058 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:58.774112 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.778073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.781607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:58.781686 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:58.808679 1225677 cri.go:89] found id: ""
	I1217 01:35:58.808703 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.808719 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:58.808725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:58.808785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:58.835922 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:58.835942 1225677 cri.go:89] found id: ""
	I1217 01:35:58.835951 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:58.836007 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.839615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:58.839689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:58.866788 1225677 cri.go:89] found id: ""
	I1217 01:35:58.866813 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.866823 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:58.866833 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:58.866866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:58.968702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:58.968738 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:58.989939 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:58.989967 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:59.058020 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:59.058046 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:59.058059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:59.088364 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:59.088394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:59.141100 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:59.141135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:59.232851 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:59.232891 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:59.262771 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:59.262800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:59.290187 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:59.290224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:59.339890 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:59.339924 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:59.422198 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:59.422236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:01.956538 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:01.967590 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:01.967660 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:02.007538 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.007575 1225677 cri.go:89] found id: ""
	I1217 01:36:02.007584 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:02.007670 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.012001 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:02.012136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:02.046710 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.046735 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.046741 1225677 cri.go:89] found id: ""
	I1217 01:36:02.046749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:02.046804 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.050667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.054450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:02.054546 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:02.081851 1225677 cri.go:89] found id: ""
	I1217 01:36:02.081880 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.081890 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:02.081897 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:02.081980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:02.112077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.112101 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.112106 1225677 cri.go:89] found id: ""
	I1217 01:36:02.112114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:02.112169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.116263 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.121396 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:02.121492 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:02.152376 1225677 cri.go:89] found id: ""
	I1217 01:36:02.152404 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.152497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:02.152523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:02.152642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:02.187133 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.187159 1225677 cri.go:89] found id: ""
	I1217 01:36:02.187168 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:02.187247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.191078 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:02.191173 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:02.220566 1225677 cri.go:89] found id: ""
	I1217 01:36:02.220593 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.220602 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:02.220611 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:02.220659 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.253992 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:02.254021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.304043 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:02.304077 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.350981 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:02.351020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.431358 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:02.431393 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.458269 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:02.458298 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:02.561780 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:02.561820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:02.582487 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:02.582522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:02.663558 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:02.663583 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:02.663596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.700536 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:02.700568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:02.775505 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:02.775547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.310734 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:05.322909 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:05.322985 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:05.350653 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.350738 1225677 cri.go:89] found id: ""
	I1217 01:36:05.350762 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:05.350819 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.355346 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:05.355461 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:05.385411 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:05.385439 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.385445 1225677 cri.go:89] found id: ""
	I1217 01:36:05.385453 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:05.385511 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.389761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.393387 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:05.393463 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:05.420412 1225677 cri.go:89] found id: ""
	I1217 01:36:05.420495 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.420505 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:05.420511 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:05.420569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:05.452034 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:05.452060 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.452066 1225677 cri.go:89] found id: ""
	I1217 01:36:05.452075 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:05.452131 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.456205 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.460128 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:05.460221 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:05.486956 1225677 cri.go:89] found id: ""
	I1217 01:36:05.486986 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.486995 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:05.487002 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:05.487063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:05.518138 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.518160 1225677 cri.go:89] found id: ""
	I1217 01:36:05.518169 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:05.518227 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.522038 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:05.522112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:05.552883 1225677 cri.go:89] found id: ""
	I1217 01:36:05.552951 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.552969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:05.552980 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:05.552994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.580975 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:05.581006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:05.677135 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:05.677178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:05.697133 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:05.697163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.725150 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:05.725181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.768358 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:05.768396 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.794846 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:05.794876 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:05.871841 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:05.871921 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.905951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:05.905982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:05.976460 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:05.976482 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:05.976495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:06.030179 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:06.030260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.614353 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:08.625446 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:08.625527 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:08.652272 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.652300 1225677 cri.go:89] found id: ""
	I1217 01:36:08.652309 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:08.652372 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.656164 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:08.656237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:08.682167 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.682186 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:08.682190 1225677 cri.go:89] found id: ""
	I1217 01:36:08.682198 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:08.682258 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.686632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.690338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:08.690409 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:08.717708 1225677 cri.go:89] found id: ""
	I1217 01:36:08.717732 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.717741 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:08.717748 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:08.717805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:08.754193 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.754217 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:08.754222 1225677 cri.go:89] found id: ""
	I1217 01:36:08.754229 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:08.754285 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.758295 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.761917 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:08.762011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:08.793723 1225677 cri.go:89] found id: ""
	I1217 01:36:08.793750 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.793761 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:08.793774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:08.793833 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:08.820995 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:08.821018 1225677 cri.go:89] found id: ""
	I1217 01:36:08.821027 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:08.821109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.824969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:08.825043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:08.850861 1225677 cri.go:89] found id: ""
	I1217 01:36:08.850896 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.850906 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:08.850917 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:08.850929 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:08.927540 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:08.927562 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:08.927576 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.953082 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:08.953110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.994744 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:08.994781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:09.027277 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:09.027305 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:09.056339 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:09.056367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:09.129785 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:09.129820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:09.161526 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:09.161607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:09.261869 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:09.261908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:09.282618 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:09.282652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:09.328912 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:09.328949 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:11.909228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:11.920145 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:11.920215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:11.953558 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:11.953581 1225677 cri.go:89] found id: ""
	I1217 01:36:11.953589 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:11.953643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.957221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:11.957293 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:11.984240 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:11.984263 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:11.984268 1225677 cri.go:89] found id: ""
	I1217 01:36:11.984276 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:11.984336 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.987996 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.991849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:11.991924 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:12.022066 1225677 cri.go:89] found id: ""
	I1217 01:36:12.022096 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.022106 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:12.022113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:12.022174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:12.058540 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.058563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.058569 1225677 cri.go:89] found id: ""
	I1217 01:36:12.058577 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:12.058629 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.063379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.067419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:12.067548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:12.095872 1225677 cri.go:89] found id: ""
	I1217 01:36:12.095900 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.095922 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:12.095929 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:12.095998 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:12.134836 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.134910 1225677 cri.go:89] found id: ""
	I1217 01:36:12.134933 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:12.135022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.139454 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:12.139524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:12.178455 1225677 cri.go:89] found id: ""
	I1217 01:36:12.178481 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.178491 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:12.178500 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:12.178538 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.215176 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:12.215204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:12.304978 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:12.305015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:12.342716 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:12.342745 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:12.444908 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:12.444945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:12.463288 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:12.463316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:12.536568 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:12.536589 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:12.536603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:12.576446 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:12.576479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.652969 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:12.653004 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.684862 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:12.684893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:12.713785 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:12.713815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:15.267669 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:15.282407 1225677 out.go:203] 
	W1217 01:36:15.285472 1225677 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 01:36:15.285518 1225677 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 01:36:15.285531 1225677 out.go:285] * Related issues:
	W1217 01:36:15.285545 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 01:36:15.285561 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 01:36:15.288521 1225677 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.00263192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.018401147Z" level=info msg="Created container 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62: kube-system/storage-provisioner/storage-provisioner" id=1949dc31-1f1c-4b50-a2e1-37b3fdbf1dae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.019096564Z" level=info msg="Starting container: 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62" id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.02762405Z" level=info msg="Started container" PID=1465 containerID=69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62 description=kube-system/storage-provisioner/storage-provisioner id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer sandboxID=201ec2eb9e7bac96947c26eb05eaeb60a6c9cb562fc7abd5b112bcffc3034df6
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.942366958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946089951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.9461257Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946150479Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949691184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.94972877Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949750136Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953024484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953060389Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953083707Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956843738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956882473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.984628463Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=d06134a9-f254-4735-8afd-66ee773b0add name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.986619446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=64030ed7-d453-4dae-a62d-31943ce0a699 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988074458Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988182542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.010661643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.011529823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.034308469Z" level=info msg="Created container bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.036802709Z" level=info msg="Starting container: bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee" id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.042056225Z" level=info msg="Started container" PID=1514 containerID=bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee description=kube-system/kube-controller-manager-ha-202151/kube-controller-manager id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5021c181f938b38114a133bf254586f8ff5e1e22eea40c87bb44019760307250
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bbbccca1f1945       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Running             kube-controller-manager   7                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	69c29e5195bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Running             storage-provisioner       7                   201ec2eb9e7ba       storage-provisioner                 kube-system
	3345ee69cef2f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   7 minutes ago       Exited              kube-controller-manager   6                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	e2674511b7c44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       6                   201ec2eb9e7ba       storage-provisioner                 kube-system
	5b41f976d94aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   7991c76c60a45       coredns-66bc5c9577-km6lq            kube-system
	f78b81e996c76       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   9 minutes ago       Running             busybox                   2                   b40c6af808cd2       busybox-7b57f96db7-hw4rm            default
	4f3ffacfcf52c       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   9 minutes ago       Running             kube-proxy                2                   db6cac339dafd       kube-proxy-5gdc5                    kube-system
	cc242e356e74c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   416ecd7d82605       coredns-66bc5c9577-4s6qf            kube-system
	421b902e0a04a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   0059b57d997fb       kindnet-7b5wx                       kube-system
	9deff052e5328       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   9 minutes ago       Running             etcd                      2                   cdd6d86a58561       etcd-ha-202151                      kube-system
	b08781420f13d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   9 minutes ago       Running             kube-apiserver            3                   55c73e3aeca0b       kube-apiserver-ha-202151            kube-system
	d2d094f7ce12d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   9 minutes ago       Running             kube-scheduler            2                   9fa81adaf2298       kube-scheduler-ha-202151            kube-system
	f70584959dd02       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  2                   5cb308ab59abd       kube-vip-ha-202151                  kube-system
	
	
	==> coredns [5b41f976d94aab2a66d015407415d4106cf8778628764f4904a5062779241af6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cc242e356e74c1c82ae80013999351dff6fb19a83d4a91a90cd125e034418779] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-202151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:37:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-202151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                7edb1e1f-1b17-415f-9229-48ba3527eefe
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hw4rm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-66bc5c9577-4s6qf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 coredns-66bc5c9577-km6lq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 etcd-ha-202151                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-7b5wx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-202151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-202151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-5gdc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-202151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-202151                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m8s                   kube-proxy       
	  Normal   Starting                 24m                    kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     24m                    kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 24m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 24m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24m                    kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m                    kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           24m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeReady                24m                    kubelet          Node ha-202151 status is now: NodeReady
	  Normal   RegisteredNode           22m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   Starting                 9m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m20s (x8 over 9m21s)  kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m20s (x8 over 9m21s)  kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m20s (x8 over 9m21s)  kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m38s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	
	
	Name:               ha-202151-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:13:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-202151-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                04eb29d0-5ea5-46d1-ae46-afe3ee374602
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rz794                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-202151-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-nt6qx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-202151-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-202151-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-hp525                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-202151-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-202151-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   RegisteredNode           24m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           22m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             19m                node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           6m38s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             5m48s              node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           49s                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	
	
	Name:               ha-202151-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:16:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-202151-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                84c842f9-c3a2-4245-b176-e32c4cbe3e2c
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2d7p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kindnet-cntp7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-proxy-kqgdw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 21m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     21m                cidrAllocator    Node ha-202151-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeHasSufficientPID     21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeReady                20m                kubelet          Node ha-202151-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           6m38s              node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeNotReady             5m48s              node-controller  Node ha-202151-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           49s                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	
	
	Name:               ha-202151-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_37_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:37:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:37:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-202151-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                d903183d-46dc-44c6-9b30-b71d4e86967d
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-202151-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-rcbrp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-ha-202151-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-202151-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-52s97                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-ha-202151-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-vip-ha-202151-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        43s   kube-proxy       
	  Normal  RegisteredNode  48s   node-controller  Node ha-202151-m05 event: Registered Node ha-202151-m05 in Controller
	  Normal  RegisteredNode  44s   node-controller  Node ha-202151-m05 event: Registered Node ha-202151-m05 in Controller
	
	
	==> dmesg <==
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	[Dec17 01:12] overlayfs: idmapped layers are currently not supported
	[Dec17 01:13] overlayfs: idmapped layers are currently not supported
	[Dec17 01:14] overlayfs: idmapped layers are currently not supported
	[Dec17 01:16] overlayfs: idmapped layers are currently not supported
	[Dec17 01:17] overlayfs: idmapped layers are currently not supported
	[Dec17 01:19] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	[Dec17 01:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c] <==
	{"level":"info","ts":"2025-12-17T01:36:49.760205Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:49.802740Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T01:36:49.802848Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"error","ts":"2025-12-17T01:36:49.817323Z","caller":"etcdserver/server.go:1601","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1601\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1542\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1514\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1466\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserv
er/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoo
gle.golang.org/grpc@v1.71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-12-17T01:36:49.962251Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":4980736,"size":"5.0 MB"}
	{"level":"info","ts":"2025-12-17T01:36:50.216154Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":5111,"remote-peer-id":"2e3bda924e1ae8ff","bytes":4990308,"size":"5.0 MB"}
	{"level":"info","ts":"2025-12-17T01:36:50.331837Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(3331496671281080575 4778087298962311874 12593026477526642892)"}
	{"level":"info","ts":"2025-12-17T01:36:50.332000Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.332048Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"2e3bda924e1ae8ff"}
	{"level":"warn","ts":"2025-12-17T01:36:50.491718Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:36:50.492625Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:36:50.793120Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2e3bda924e1ae8ff","error":"failed to write 2e3bda924e1ae8ff on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:44814: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-17T01:36:50.793284Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.910244Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.910296Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.997714Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"warn","ts":"2025-12-17T01:36:51.112334Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2e3bda924e1ae8ff","error":"failed to write 2e3bda924e1ae8ff on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:44800: write: connection reset by peer)"}
	{"level":"warn","ts":"2025-12-17T01:36:51.112463Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.165824Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.166420Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T01:36:51.166484Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.180136Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-17T01:36:51.180242Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:37:03.787083Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-17T01:37:20.217246Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","bytes":4990308,"size":"5.0 MB","took":"30.583062521s"}
	
	
	==> kernel <==
	 01:37:51 up  7:20,  0 user,  load average: 1.26, 1.43, 1.54
	Linux ha-202151 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [421b902e0a04a8b9de33dba40eff9de2915e948b549831a023a55f14ab43a351] <==
	I1217 01:37:11.942332       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.6 Flags: [] Table: 0 Realm: 0} 
	I1217 01:37:21.944510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:21.944544       1 main.go:301] handling current node
	I1217 01:37:21.944560       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:21.944566       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:21.944706       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:21.944720       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:37:21.944777       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:21.944788       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:31.943099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:31.943138       1 main.go:301] handling current node
	I1217 01:37:31.943156       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:31.943162       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:31.943311       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:31.943326       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:37:31.943382       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:31.943387       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:41.945351       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:41.945459       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:41.945625       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:41.945666       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:37:41.945764       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:41.945816       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:41.945910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:41.946016       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43] <==
	{"level":"warn","ts":"2025-12-17T01:30:14.097955Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a885a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e254a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098431Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c61680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098550Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a21e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002813860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098715Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021443c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100260Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002913860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100450Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002114960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001752b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002912d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.101157Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a9c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.108687Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 01:30:14.109232       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 01:30:14.109341       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111281       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111377       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.112738       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.651626ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-12-17T01:30:14.178037Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000eec000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I1217 01:30:20.949098       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1217 01:30:43.911399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1217 01:31:13.533495       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:32:03.642642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 01:32:03.692026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317] <==
	I1217 01:30:11.991091       1 serving.go:386] Generated self-signed cert in-memory
	I1217 01:30:13.217832       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1217 01:30:13.217864       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:13.219443       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 01:30:13.219569       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 01:30:13.220274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 01:30:13.220329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 01:30:24.189762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee] <==
	E1217 01:31:53.506340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506373       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506437       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	I1217 01:31:53.524733       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.571989       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.572097       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.606958       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.607067       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646154       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646268       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695195       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695310       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742527       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742634       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785957       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785994       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:31:53.833471       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:32:03.448660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	E1217 01:37:02.791748       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-99fnt failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-99fnt\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1217 01:37:03.587675       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-202151-m05\" does not exist"
	I1217 01:37:03.614684       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-202151-m05" podCIDRs=["10.244.2.0/24"]
	I1217 01:37:03.753344       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-202151-m05"
	I1217 01:37:03.753819       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1217 01:37:48.762452       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [4f3ffacfcf52c27d4a48be1c9762e97d9c8b2f9eff204b9108c451da8b2defab] <==
	E1217 01:28:51.112803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:58.124554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:10.248785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:26.153294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:30:07.912871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 01:30:42.899769       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:30:42.899808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 01:30:42.899895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:30:42.921440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 01:30:42.921510       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:30:42.927648       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:30:42.928009       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:30:42.928034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:42.931509       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:30:42.931589       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:30:42.931909       1 config.go:200] "Starting service config controller"
	I1217 01:30:42.931953       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:30:42.932968       1 config.go:309] "Starting node config controller"
	I1217 01:30:42.932995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:30:42.933003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:30:42.933332       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:30:42.933352       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:30:43.031859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 01:30:43.032046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:30:43.033393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e] <==
	E1217 01:28:38.924937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:38.925147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:38.925091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:38.925212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:38.925293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:39.827962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:39.828496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 01:28:39.945026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:39.947443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 01:28:40.059965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 01:28:40.060779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:40.088703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:40.109776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 01:28:40.129468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:40.134968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 01:28:40.195130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 01:28:40.254624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 01:28:40.281191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 01:28:40.314175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 01:28:40.347761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 01:28:40.381360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 01:28:40.463231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 01:28:40.490812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 01:28:40.517370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 01:28:41.991837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 01:29:56 ha-202151 kubelet[802]: I1217 01:29:56.984304     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:29:56 ha-202151 kubelet[802]: E1217 01:29:56.984531     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:01 ha-202151 kubelet[802]: E1217 01:30:01.439578     802 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-202151)" interval="400ms"
	Dec 17 01:30:02 ha-202151 kubelet[802]: E1217 01:30:02.001281     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:10 ha-202151 kubelet[802]: I1217 01:30:10.983522     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:11 ha-202151 kubelet[802]: E1217 01:30:11.841503     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	Dec 17 01:30:12 ha-202151 kubelet[802]: E1217 01:30:12.002934     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.438401     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.439109     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:24 ha-202151 kubelet[802]: E1217 01:30:24.439355     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.449813     802 scope.go:117] "RemoveContainer" containerID="61c769055e2e33178655adbc6de856c58722cb4c70738c4d94a535d730bf75c6"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.450264     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:27 ha-202151 kubelet[802]: E1217 01:30:27.450420     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:29 ha-202151 kubelet[802]: I1217 01:30:29.966353     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:29 ha-202151 kubelet[802]: E1217 01:30:29.966538     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:34 ha-202151 kubelet[802]: I1217 01:30:34.175661     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:34 ha-202151 kubelet[802]: E1217 01:30:34.175845     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:38 ha-202151 kubelet[802]: I1217 01:30:38.984627     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:38 ha-202151 kubelet[802]: E1217 01:30:38.985748     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:47 ha-202151 kubelet[802]: I1217 01:30:47.984399     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:47 ha-202151 kubelet[802]: E1217 01:30:47.984633     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:52 ha-202151 kubelet[802]: I1217 01:30:52.985253     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:58 ha-202151 kubelet[802]: I1217 01:30:58.984851     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:58 ha-202151 kubelet[802]: E1217 01:30:58.985050     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:31:09 ha-202151 kubelet[802]: I1217 01:31:09.983912     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-202151 -n ha-202151
helpers_test.go:270: (dbg) Run:  kubectl --context ha-202151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (86.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (6.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:309: expected profile "ha-202151" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-202151\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-202151\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-202151\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-202151
helpers_test.go:244: (dbg) docker inspect ha-202151:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	        "Created": "2025-12-17T01:12:34.697109094Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1225803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:28:24.223784082Z",
	            "FinishedAt": "2025-12-17T01:28:23.510213695Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d-json.log",
	        "Name": "/ha-202151",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-202151:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-202151",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d",
	                "LowerDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20fdd04f77ae6d0cda04c7d3506dd388a13425b8efac37a10bd70148a936d871/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-202151",
	                "Source": "/var/lib/docker/volumes/ha-202151/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-202151",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-202151",
	                "name.minikube.sigs.k8s.io": "ha-202151",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a8bfe290f37deb1c3104d9ab559bda078e71c5706919642a39ad4ea7fcab4f9",
	            "SandboxKey": "/var/run/docker/netns/1a8bfe290f37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-202151": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:fe:96:8f:04:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e224ccab4890fdef242aee82a08ae93dfe44ddd1860f17db152892136a611dec",
	                    "EndpointID": "d9f94b3340492bc0b924fd0e2620aaaaec200a88061066241297f013a7336f77",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-202151",
	                        "0d1af93acb20"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
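The inspect output above shows the kicbase container publishing ports 22, 2376, 5000, 8443 and 32443 on 127.0.0.1 high ports. A minimal sketch of how the SSH port in that Ports map is read back, using the same Go template the provisioner runs later in this log (the expected value here is 33958):

    # Resolve the published host port for 22/tcp on the ha-202151 container
    # (same template as the "docker container inspect -f" calls further down).
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-202151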
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-202151 -n ha-202151
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 logs -n 25: (2.378742267s)
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt                                                             │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt                                                 │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m02 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ cp      │ ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt               │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ ssh     │ ha-202151 ssh -n ha-202151-m03 sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:17 UTC │
	│ node    │ ha-202151 node start m02 --alsologtostderr -v 5                                                                                      │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │                     │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │                     │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:25 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5                                                                                   │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:25 UTC │ 17 Dec 25 01:27 UTC │
	│ node    │ ha-202151 node list --alsologtostderr -v 5                                                                                           │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │                     │
	│ node    │ ha-202151 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:27 UTC │
	│ stop    │ ha-202151 stop --alsologtostderr -v 5                                                                                                │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:27 UTC │ 17 Dec 25 01:28 UTC │
	│ start   │ ha-202151 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:28 UTC │                     │
	│ node    │ ha-202151 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-202151 │ jenkins │ v1.37.0 │ 17 Dec 25 01:36 UTC │ 17 Dec 25 01:37 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:28:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:28:23.957919 1225677 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.958241 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958276 1225677 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.958300 1225677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.958577 1225677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.958999 1225677 out.go:368] Setting JSON to false
	I1217 01:28:23.959883 1225677 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25854,"bootTime":1765909050,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:28:23.959981 1225677 start.go:143] virtualization:  
	I1217 01:28:23.963109 1225677 out.go:179] * [ha-202151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:28:23.966861 1225677 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:23.967008 1225677 notify.go:221] Checking for updates...
	I1217 01:28:23.972825 1225677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:23.975704 1225677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:23.978560 1225677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:28:23.981565 1225677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:28:23.984558 1225677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:23.987973 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.988577 1225677 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:24.018679 1225677 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:28:24.018817 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.078613 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.06901697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.078731 1225677 docker.go:319] overlay module found
	I1217 01:28:24.081724 1225677 out.go:179] * Using the docker driver based on existing profile
	I1217 01:28:24.084659 1225677 start.go:309] selected driver: docker
	I1217 01:28:24.084679 1225677 start.go:927] validating driver "docker" against &{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.084825 1225677 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:24.084933 1225677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:28:24.139102 1225677 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-17 01:28:24.130176461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:28:24.139528 1225677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:28:24.139560 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:24.139616 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:24.139662 1225677 start.go:353] cluster config:
	{Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:24.142829 1225677 out.go:179] * Starting "ha-202151" primary control-plane node in "ha-202151" cluster
	I1217 01:28:24.145513 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:24.148343 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:24.151136 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:24.151182 1225677 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 01:28:24.151172 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:24.151191 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:24.151281 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:24.151292 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:24.151447 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.170893 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:24.170917 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:24.170932 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:24.170962 1225677 start.go:360] acquireMachinesLock for ha-202151: {Name:mk96d245790ddb7861f0cddd8ac09eba6d29a858 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:24.171020 1225677 start.go:364] duration metric: took 36.119µs to acquireMachinesLock for "ha-202151"
	I1217 01:28:24.171043 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:24.171052 1225677 fix.go:54] fixHost starting: 
	I1217 01:28:24.171312 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.188404 1225677 fix.go:112] recreateIfNeeded on ha-202151: state=Stopped err=<nil>
	W1217 01:28:24.188458 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:24.191811 1225677 out.go:252] * Restarting existing docker container for "ha-202151" ...
	I1217 01:28:24.191909 1225677 cli_runner.go:164] Run: docker start ha-202151
	I1217 01:28:24.438707 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:24.459881 1225677 kic.go:430] container "ha-202151" state is running.
	I1217 01:28:24.460741 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:24.487033 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:24.487599 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:24.487676 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:24.511372 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:24.513726 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:24.513748 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:24.516008 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:28:27.648958 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.648981 1225677 ubuntu.go:182] provisioning hostname "ha-202151"
	I1217 01:28:27.649043 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.671053 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.671376 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.671387 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151 && echo "ha-202151" | sudo tee /etc/hostname
	I1217 01:28:27.816001 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151
	
	I1217 01:28:27.816128 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:27.833557 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:27.833865 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:27.833885 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:27.968607 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:27.968638 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:27.968669 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:27.968686 1225677 provision.go:84] configureAuth start
	I1217 01:28:27.968751 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:27.986183 1225677 provision.go:143] copyHostCerts
	I1217 01:28:27.986244 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986288 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:27.986301 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:27.986379 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:27.986471 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986493 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:27.986502 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:27.986530 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:27.986576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986601 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:27.986609 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:27.986637 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:27.986687 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151 san=[127.0.0.1 192.168.49.2 ha-202151 localhost minikube]
	I1217 01:28:28.161966 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:28.162074 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:28.162136 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.180162 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
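The provisioner dials 127.0.0.1:33958 as user docker with the profile's machine key. A manual equivalent, purely for illustration and not something the test runs, would be:

    # Open the same SSH session the provisioner uses (key path, user and port
    # taken from the sshutil line above; illustrative only).
    ssh -i /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa \
        -p 33958 docker@127.0.0.1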
	I1217 01:28:28.276314 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:28.276374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:28.294399 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:28.294463 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1217 01:28:28.312546 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:28.312611 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:28.329872 1225677 provision.go:87] duration metric: took 361.168151ms to configureAuth
	I1217 01:28:28.329900 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:28.330141 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:28.330260 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.347687 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:28.348017 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I1217 01:28:28.348037 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:28.719002 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:28.719025 1225677 machine.go:97] duration metric: took 4.231409969s to provisionDockerMachine
	I1217 01:28:28.719036 1225677 start.go:293] postStartSetup for "ha-202151" (driver="docker")
	I1217 01:28:28.719047 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:28.719106 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:28.719158 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.741197 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.836254 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:28.839569 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:28.839599 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:28.839611 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:28.839667 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:28.839747 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:28.839758 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:28.839856 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:28.847310 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:28.864518 1225677 start.go:296] duration metric: took 145.466453ms for postStartSetup
	I1217 01:28:28.864667 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:28.864709 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:28.882572 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:28.974073 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:28.979262 1225677 fix.go:56] duration metric: took 4.808204011s for fixHost
	I1217 01:28:28.979289 1225677 start.go:83] releasing machines lock for "ha-202151", held for 4.808256014s
	I1217 01:28:28.979366 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:28:29.000545 1225677 ssh_runner.go:195] Run: cat /version.json
	I1217 01:28:29.000593 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:29.000605 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.000678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:28:29.017863 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.030045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:28:29.205586 1225677 ssh_runner.go:195] Run: systemctl --version
	I1217 01:28:29.212211 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:29.247878 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:29.252247 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:29.252372 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:29.260987 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:29.261012 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:29.261044 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:29.261091 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:29.276500 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:29.289977 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:29.290113 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:29.306150 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:29.319359 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:29.442260 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:29.554130 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:29.554229 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:29.569409 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:29.582225 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:29.693269 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:29.815821 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:29.829762 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:29.843587 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:29.843675 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.852929 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:29.853026 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.862094 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.870988 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.879860 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:29.888714 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.897427 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.906242 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:29.915392 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:29.923247 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:29.930867 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.085763 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
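The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough way to confirm the result on the node; the expected lines are reconstructed from those commands, not captured from this run:

    # Keys the edits above should leave in the CRI-O drop-in (reconstructed, not captured).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",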
	I1217 01:28:30.268466 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:28:30.268540 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:28:30.272645 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:28:30.272717 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:28:30.276359 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:28:30.302094 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:28:30.302194 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.329875 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:28:30.364988 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:28:30.367851 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:28:30.383155 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:28:30.387105 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:28:30.397488 1225677 kubeadm.go:884] updating cluster {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:28:30.397642 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:30.397701 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.434465 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.434490 1225677 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:28:30.434546 1225677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:28:30.461597 1225677 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:28:30.461622 1225677 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:28:30.461631 1225677 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1217 01:28:30.461733 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
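The ExecStart line above ends up in the kubelet systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down. If one were debugging this by hand on the node, the effective unit plus drop-in could be viewed with:

    # Show the kubelet unit together with the generated 10-kubeadm.conf drop-in
    # (a debugging aid, not part of the test flow).
    systemctl cat kubelet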
	I1217 01:28:30.461815 1225677 ssh_runner.go:195] Run: crio config
	I1217 01:28:30.524993 1225677 cni.go:84] Creating CNI manager for ""
	I1217 01:28:30.525016 1225677 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1217 01:28:30.525041 1225677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:28:30.525063 1225677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-202151 NodeName:ha-202151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:28:30.525197 1225677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-202151"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
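The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new by the scp step below. As a sanity check outside the test flow, recent kubeadm releases can lint such a file; a sketch, assuming the kubeadm binary sits next to the kubelet binary found under /var/lib/minikube/binaries/v1.34.2:

    # Lint the generated config on the node (illustrative; binary location assumed).
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new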
	
	I1217 01:28:30.525219 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:28:30.525269 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:28:30.537247 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:30.537359 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
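	(Annotation: the kube-vip manifest above is written without IPVS-based control-plane load balancing because the earlier `lsmod | grep ip_vs` probe failed. A small Go sketch of that decision is shown below; reading /proc/modules directly is an assumption for illustration, the log shows minikube shelling out to lsmod instead.)

	// ipvs_check_sketch.go
	// Decide whether kube-vip load balancing could be enabled by checking
	// for the ip_vs kernel module.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipvsLoaded() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		for s.Scan() {
			// Each line starts with the module name, e.g. "ip_vs 200704 6 ...".
			if strings.HasPrefix(s.Text(), "ip_vs") {
				return true, nil
			}
		}
		return false, s.Err()
	}

	func main() {
		ok, err := ipvsLoaded()
		if err != nil || !ok {
			fmt.Println("ip_vs not available: skipping control-plane load balancing")
			return
		}
		fmt.Println("ip_vs available: load balancing could be enabled in the kube-vip manifest")
	}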
	I1217 01:28:30.537423 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:28:30.545256 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:28:30.545330 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1217 01:28:30.553189 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1217 01:28:30.566160 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:28:30.579061 1225677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1217 01:28:30.591667 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:28:30.604079 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:28:30.607859 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
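	(Annotation: the bash one-liner above drops any stale /etc/hosts entry for control-plane.minikube.internal and appends the HA VIP. A rough Go equivalent is sketched below; the in-place rewrite is a simplification, the real command stages the file in /tmp and copies it back with sudo.)

	// hosts_pin_sketch.go
	// Replace the control-plane.minikube.internal entry in a hosts file.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}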
	I1217 01:28:30.617660 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:30.737827 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:28:30.755642 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.2
	I1217 01:28:30.755663 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:28:30.755694 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:30.755839 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:28:30.755906 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:28:30.755919 1225677 certs.go:257] generating profile certs ...
	I1217 01:28:30.755998 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:28:30.756031 1225677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698
	I1217 01:28:30.756050 1225677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1217 01:28:31.070955 1225677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 ...
	I1217 01:28:31.071062 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698: {Name:mke1b333e19e123d757f2361ffab64b3ce630ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071323 1225677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 ...
	I1217 01:28:31.071369 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698: {Name:mk12d8ef8dbb1ef8ff84c5ba8c83b430a9515230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:31.071553 1225677 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt
	I1217 01:28:31.071777 1225677 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.91228698 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key
	I1217 01:28:31.071982 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:28:31.072020 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:28:31.072053 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:28:31.072099 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:28:31.072142 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:28:31.072179 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:28:31.072222 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:28:31.072260 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:28:31.072291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:28:31.072379 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:28:31.072496 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:28:31.072540 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:28:31.072623 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:28:31.072699 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:28:31.072755 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:28:31.072888 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:31.072995 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.073038 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.073074 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.073717 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:28:31.098054 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:28:31.121354 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:28:31.140746 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:28:31.159713 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:28:31.178284 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:28:31.196338 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:28:31.214382 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:28:31.231910 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:28:31.249283 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:28:31.267150 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:28:31.284464 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:28:31.297370 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:28:31.303511 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.310796 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:28:31.318435 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322279 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.322380 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:28:31.363578 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:28:31.371139 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.378596 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:28:31.385983 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389802 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.389911 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:28:31.449546 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:28:31.463605 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.474127 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:28:31.484475 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489596 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.489713 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:28:31.551435 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:28:31.559450 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:28:31.573170 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:28:31.639157 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:28:31.715122 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:28:31.783477 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:28:31.844822 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:28:31.905215 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
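	(Annotation: the repeated `openssl x509 -noout -in <cert> -checkend 86400` calls above verify that each control-plane certificate is still valid for at least 24 hours. The same check expressed in Go, as a standalone sketch with the cert path passed as an argument, is below.)

	// checkend_sketch.go
	// Fail if the given PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin(os.Args[1], 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			log.Fatal("certificate will expire within 24h")
		}
		fmt.Println("certificate is valid for at least 24h")
	}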
	I1217 01:28:31.967945 1225677 kubeadm.go:401] StartCluster: {Name:ha-202151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:28:31.968163 1225677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:28:31.968241 1225677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:28:32.018626 1225677 cri.go:89] found id: "9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c"
	I1217 01:28:32.018691 1225677 cri.go:89] found id: "b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43"
	I1217 01:28:32.018711 1225677 cri.go:89] found id: "d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e"
	I1217 01:28:32.018735 1225677 cri.go:89] found id: "f70584959dd02aedc5247d28de369b3dfbec762797364a5b46746119bcd380ba"
	I1217 01:28:32.018753 1225677 cri.go:89] found id: "82cc4882889dc4d930d89f36ac77114d0161f4172216bc47431b8697c0630be5"
	I1217 01:28:32.018781 1225677 cri.go:89] found id: ""
	I1217 01:28:32.018853 1225677 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 01:28:32.044061 1225677 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T01:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1217 01:28:32.044185 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:28:32.052950 1225677 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:28:32.053010 1225677 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:28:32.053080 1225677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:28:32.061188 1225677 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:28:32.061654 1225677 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-202151" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.061797 1225677 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-202151" cluster setting kubeconfig missing "ha-202151" context setting]
	I1217 01:28:32.062106 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
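	(Annotation: the kubeconfig repair above adds the missing "ha-202151" cluster and context entries. The idea can be pictured with client-go's clientcmd API; the sketch below is illustrative, not minikube's kubeconfig code, and the kubeconfig path and certificate paths are placeholder assumptions.)

	// kubeconfig_repair_sketch.go
	// Add a cluster, context, and user entry to an existing kubeconfig.
	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/.kube/config" // placeholder path
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		cfg.Clusters["ha-202151"] = &clientcmdapi.Cluster{
			Server:               "https://192.168.49.2:8443",
			CertificateAuthority: "/path/to/ca.crt", // placeholder
		}
		cfg.AuthInfos["ha-202151"] = &clientcmdapi.AuthInfo{
			ClientCertificate: "/path/to/client.crt", // placeholder
			ClientKey:         "/path/to/client.key", // placeholder
		}
		cfg.Contexts["ha-202151"] = &clientcmdapi.Context{
			Cluster:  "ha-202151",
			AuthInfo: "ha-202151",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}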
	I1217 01:28:32.062698 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:28:32.063465 1225677 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:28:32.063546 1225677 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:28:32.063583 1225677 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:28:32.063613 1225677 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:28:32.063651 1225677 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:28:32.063976 1225677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:28:32.063525 1225677 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 01:28:32.081817 1225677 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1217 01:28:32.081837 1225677 kubeadm.go:602] duration metric: took 28.80443ms to restartPrimaryControlPlane
	I1217 01:28:32.081846 1225677 kubeadm.go:403] duration metric: took 113.913079ms to StartCluster
	I1217 01:28:32.081861 1225677 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.081919 1225677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:28:32.082486 1225677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:28:32.082669 1225677 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:28:32.082688 1225677 start.go:242] waiting for startup goroutines ...
	I1217 01:28:32.082706 1225677 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:28:32.083152 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.086942 1225677 out.go:179] * Enabled addons: 
	I1217 01:28:32.089944 1225677 addons.go:530] duration metric: took 7.236595ms for enable addons: enabled=[]
	I1217 01:28:32.089983 1225677 start.go:247] waiting for cluster config update ...
	I1217 01:28:32.089992 1225677 start.go:256] writing updated cluster config ...
	I1217 01:28:32.093327 1225677 out.go:203] 
	I1217 01:28:32.096604 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:32.096790 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.100238 1225677 out.go:179] * Starting "ha-202151-m02" control-plane node in "ha-202151" cluster
	I1217 01:28:32.103257 1225677 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:28:32.106243 1225677 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:28:32.109227 1225677 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:28:32.109291 1225677 cache.go:65] Caching tarball of preloaded images
	I1217 01:28:32.109420 1225677 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:28:32.109454 1225677 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:28:32.109592 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.109854 1225677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:28:32.139073 1225677 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:28:32.139092 1225677 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:28:32.139106 1225677 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:28:32.139130 1225677 start.go:360] acquireMachinesLock for ha-202151-m02: {Name:mke470c952ef21b52766346e32bdb3f1cf613f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:28:32.139181 1225677 start.go:364] duration metric: took 36.692µs to acquireMachinesLock for "ha-202151-m02"
	I1217 01:28:32.139199 1225677 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:28:32.139204 1225677 fix.go:54] fixHost starting: m02
	I1217 01:28:32.139463 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.170663 1225677 fix.go:112] recreateIfNeeded on ha-202151-m02: state=Stopped err=<nil>
	W1217 01:28:32.170689 1225677 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:28:32.173829 1225677 out.go:252] * Restarting existing docker container for "ha-202151-m02" ...
	I1217 01:28:32.173910 1225677 cli_runner.go:164] Run: docker start ha-202151-m02
	I1217 01:28:32.543486 1225677 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:32.572710 1225677 kic.go:430] container "ha-202151-m02" state is running.
	I1217 01:28:32.573066 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:32.602951 1225677 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/config.json ...
	I1217 01:28:32.603208 1225677 machine.go:94] provisionDockerMachine start ...
	I1217 01:28:32.603266 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:32.629641 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:32.629950 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:32.629959 1225677 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:28:32.630596 1225677 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37710->127.0.0.1:33963: read: connection reset by peer
	I1217 01:28:35.808896 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:35.808924 1225677 ubuntu.go:182] provisioning hostname "ha-202151-m02"
	I1217 01:28:35.808996 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:35.842137 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:35.842447 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:35.842466 1225677 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-202151-m02 && echo "ha-202151-m02" | sudo tee /etc/hostname
	I1217 01:28:36.038050 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-202151-m02
	
	I1217 01:28:36.038178 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.082250 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:36.082569 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:36.082593 1225677 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-202151-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-202151-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-202151-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:28:36.332805 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:28:36.332901 1225677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:28:36.332944 1225677 ubuntu.go:190] setting up certificates
	I1217 01:28:36.332991 1225677 provision.go:84] configureAuth start
	I1217 01:28:36.333104 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:36.366101 1225677 provision.go:143] copyHostCerts
	I1217 01:28:36.366154 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366188 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:28:36.366198 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:28:36.366291 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:28:36.366454 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366479 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:28:36.366484 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:28:36.366514 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:28:36.366576 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366600 1225677 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:28:36.366604 1225677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:28:36.366636 1225677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:28:36.366685 1225677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.ha-202151-m02 san=[127.0.0.1 192.168.49.3 ha-202151-m02 localhost minikube]
	I1217 01:28:36.714448 1225677 provision.go:177] copyRemoteCerts
	I1217 01:28:36.714609 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:28:36.714700 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:36.737234 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:36.864039 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 01:28:36.864124 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:28:36.913291 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 01:28:36.913360 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 01:28:36.977060 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 01:28:36.977210 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:28:37.077043 1225677 provision.go:87] duration metric: took 744.017822ms to configureAuth
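	(Annotation: the configureAuth step that just completed issues a server certificate for the m02 machine signed by the machine CA, with the SANs listed in the "generating server cert" line above. A condensed Go sketch of that kind of issuance is below; it generates a throwaway CA in memory for brevity, whereas the real flow loads ca.pem and ca-key.pem from disk, and the key size and validity are assumptions.)

	// server_cert_sketch.go
	// Issue a server certificate with DNS and IP SANs, signed by a CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; the real CA key/cert come from disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-202151-m02"}},
			DNSNames:     []string{"ha-202151-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}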
	I1217 01:28:37.077119 1225677 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:28:37.077458 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:37.077641 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:37.114203 1225677 main.go:143] libmachine: Using SSH client type: native
	I1217 01:28:37.114614 1225677 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I1217 01:28:37.114630 1225677 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:28:38.749167 1225677 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:28:38.749190 1225677 machine.go:97] duration metric: took 6.145972988s to provisionDockerMachine
	I1217 01:28:38.749202 1225677 start.go:293] postStartSetup for "ha-202151-m02" (driver="docker")
	I1217 01:28:38.749218 1225677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:28:38.749280 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:28:38.749320 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.798164 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:38.934750 1225677 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:28:38.938751 1225677 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:28:38.938784 1225677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:28:38.938805 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:28:38.938890 1225677 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:28:38.939022 1225677 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:28:38.939035 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 01:28:38.939161 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:28:38.949374 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:28:38.977662 1225677 start.go:296] duration metric: took 228.444359ms for postStartSetup
	I1217 01:28:38.977768 1225677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:28:38.977833 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:38.997045 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.094589 1225677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:28:39.100157 1225677 fix.go:56] duration metric: took 6.9609442s for fixHost
	I1217 01:28:39.100185 1225677 start.go:83] releasing machines lock for "ha-202151-m02", held for 6.960996095s
	I1217 01:28:39.100277 1225677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m02
	I1217 01:28:39.121509 1225677 out.go:179] * Found network options:
	I1217 01:28:39.124537 1225677 out.go:179]   - NO_PROXY=192.168.49.2
	W1217 01:28:39.127500 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	W1217 01:28:39.127546 1225677 proxy.go:120] fail to check proxy env: Error ip not in block
	I1217 01:28:39.127633 1225677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:28:39.127678 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.127731 1225677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:28:39.127813 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m02
	I1217 01:28:39.159911 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.160356 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m02/id_rsa Username:docker}
	I1217 01:28:39.389362 1225677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:28:39.518196 1225677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:28:39.518280 1225677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:28:39.530690 1225677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:28:39.530730 1225677 start.go:496] detecting cgroup driver to use...
	I1217 01:28:39.530766 1225677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:28:39.530828 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:28:39.559452 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:28:39.590703 1225677 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:28:39.590778 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:28:39.623053 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:28:39.646277 1225677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:28:39.924657 1225677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:28:40.211696 1225677 docker.go:234] disabling docker service ...
	I1217 01:28:40.211818 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:28:40.234789 1225677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:28:40.255311 1225677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:28:40.483522 1225677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:28:40.697787 1225677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:28:40.728627 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:28:40.773025 1225677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:28:40.773101 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.810962 1225677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:28:40.811053 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.830095 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.843899 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.859512 1225677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:28:40.875469 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.891423 1225677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:28:40.906705 1225677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
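	(Annotation: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, set the cgroup manager, and seed default_sysctls. A rough Go rendering of one of them, the pause_image rewrite, is sketched below; the regex and path mirror the logged command, but the helper itself is illustrative.)

	// crio_conf_sketch.go
	// Rewrite the pause_image setting in a CRI-O drop-in config file.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			log.Fatal(err)
		}
	}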
	I1217 01:28:40.920139 1225677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:28:40.935324 1225677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:28:40.949872 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:28:41.265195 1225677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:11.765812 1225677 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.500580562s)
	I1217 01:30:11.765836 1225677 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:11.765895 1225677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:11.773685 1225677 start.go:564] Will wait 60s for crictl version
	I1217 01:30:11.773748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:30:11.777914 1225677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:30:11.832219 1225677 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:30:11.832561 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.883307 1225677 ssh_runner.go:195] Run: crio --version
	I1217 01:30:11.931713 1225677 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 01:30:11.934749 1225677 out.go:179]   - env NO_PROXY=192.168.49.2
	I1217 01:30:11.937773 1225677 cli_runner.go:164] Run: docker network inspect ha-202151 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:30:11.958180 1225677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:11.963975 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:11.980941 1225677 mustload.go:66] Loading cluster: ha-202151
	I1217 01:30:11.981196 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:11.981523 1225677 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:30:12.010212 1225677 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:30:12.010538 1225677 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151 for IP: 192.168.49.3
	I1217 01:30:12.010547 1225677 certs.go:195] generating shared ca certs ...
	I1217 01:30:12.010562 1225677 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:12.010679 1225677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:30:12.010721 1225677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:30:12.010729 1225677 certs.go:257] generating profile certs ...
	I1217 01:30:12.010806 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key
	I1217 01:30:12.010871 1225677 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key.53e15730
	I1217 01:30:12.010909 1225677 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key
	I1217 01:30:12.010918 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 01:30:12.010930 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 01:30:12.010942 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 01:30:12.010952 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 01:30:12.010963 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 01:30:12.010976 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 01:30:12.010988 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 01:30:12.010998 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 01:30:12.011046 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:30:12.011099 1225677 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:12.011108 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:30:12.011142 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:30:12.011167 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:12.011226 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:30:12.011276 1225677 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:30:12.011308 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.011330 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.011341 1225677 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.011405 1225677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:30:12.040530 1225677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:30:12.140835 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1217 01:30:12.145679 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1217 01:30:12.155103 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1217 01:30:12.158946 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1217 01:30:12.168468 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1217 01:30:12.172730 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1217 01:30:12.182622 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1217 01:30:12.186892 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1217 01:30:12.196428 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1217 01:30:12.200769 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1217 01:30:12.210174 1225677 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1217 01:30:12.214229 1225677 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1217 01:30:12.223408 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:12.242760 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:12.263233 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:12.281118 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:12.299303 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:30:12.317115 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:12.334779 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:12.352592 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:30:12.370481 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:30:12.389095 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:30:12.412594 1225677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:12.449315 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1217 01:30:12.473400 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1217 01:30:12.494693 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1217 01:30:12.517806 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1217 01:30:12.543454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1217 01:30:12.563454 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1217 01:30:12.583785 1225677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1217 01:30:12.603782 1225677 ssh_runner.go:195] Run: openssl version
	I1217 01:30:12.611317 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.622461 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:30:12.631322 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635830 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.635962 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:30:12.683099 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:12.692252 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.701723 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:12.714594 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719579 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.719716 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:12.763558 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:12.772848 1225677 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.782803 1225677 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:30:12.792174 1225677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.797950 1225677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.798068 1225677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:30:12.843461 1225677 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:12.852350 1225677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:12.856738 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:12.902677 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:12.948658 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:12.994789 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:13.042684 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:13.096054 1225677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:30:13.158401 1225677 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1217 01:30:13.158570 1225677 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-202151-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-202151 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:13.158615 1225677 kube-vip.go:115] generating kube-vip config ...
	I1217 01:30:13.158706 1225677 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1217 01:30:13.173582 1225677 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:30:13.173705 1225677 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1217 01:30:13.173834 1225677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:13.183901 1225677 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:13.184021 1225677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1217 01:30:13.192889 1225677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 01:30:13.208806 1225677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:13.224983 1225677 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1217 01:30:13.240987 1225677 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:13.245030 1225677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:13.255387 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.401843 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.417093 1225677 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:13.416720 1225677 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:13.423303 1225677 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:13.426149 1225677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:13.647974 1225677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:13.667990 1225677 kapi.go:59] client config for ha-202151: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/ha-202151/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1217 01:30:13.668105 1225677 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1217 01:30:13.668438 1225677 node_ready.go:35] waiting up to 6m0s for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201323 1225677 node_ready.go:49] node "ha-202151-m02" is "Ready"
	I1217 01:30:14.201352 1225677 node_ready.go:38] duration metric: took 532.861298ms for node "ha-202151-m02" to be "Ready" ...
	I1217 01:30:14.201366 1225677 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:14.201430 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:14.702397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.202165 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:15.701679 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.202436 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:16.701593 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.202167 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:17.702134 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.201871 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:18.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.202178 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:19.702421 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.201608 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:20.701963 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.201849 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:21.702468 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.201659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:22.702284 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.202447 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:23.701767 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.201870 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:24.701725 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.202161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:25.701566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.201668 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:26.702034 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.202090 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:27.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.201787 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:28.701530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.202044 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:29.702049 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.202554 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.201868 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:31.702179 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.202396 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:32.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:33.702380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:34.701675 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.201765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:35.701936 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.201563 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:36.701569 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.202228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:37.702471 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.201812 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:38.701808 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.201588 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:39.701513 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.202142 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:40.701610 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.201867 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:41.702427 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.202172 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:42.701559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.202404 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:43.701704 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.201454 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:44.702205 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.201850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:45.702118 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.201665 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:46.702497 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.201634 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:47.701590 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.202217 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:48.701586 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.202252 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:49.701540 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.201658 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:50.702332 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.202380 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:51.701545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.202215 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:52.701654 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.202277 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:53.701599 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.202236 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:54.702370 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.201552 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:55.702331 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.201545 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:56.701600 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.202549 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:57.701595 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.202225 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:58.701571 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.202016 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:59.702392 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.212791 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:00.701639 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.202292 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:01.701781 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.201523 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:02.701618 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.201666 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:03.702192 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.202218 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:04.701749 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.201582 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:05.701583 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.201568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:06.702305 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.202030 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:07.702244 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.201601 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:08.702328 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.202314 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:09.701594 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.202413 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:10.701574 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.201566 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:11.702440 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.202160 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:12.701568 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.202474 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:13.701537 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:13.701628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:13.737091 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:13.737114 1225677 cri.go:89] found id: ""
	I1217 01:31:13.737124 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:13.737180 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.741133 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:13.741205 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:13.767828 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:13.767849 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:13.767854 1225677 cri.go:89] found id: ""
	I1217 01:31:13.767861 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:13.767916 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.772125 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.775836 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:13.775913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:13.807345 1225677 cri.go:89] found id: ""
	I1217 01:31:13.807369 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.807377 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:13.807384 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:13.807444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:13.838797 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:13.838817 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:13.838821 1225677 cri.go:89] found id: ""
	I1217 01:31:13.838829 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:13.838887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.843081 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.846896 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:13.846969 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:13.886939 1225677 cri.go:89] found id: ""
	I1217 01:31:13.886968 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.886977 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:13.886983 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:13.887045 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:13.927324 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:13.927350 1225677 cri.go:89] found id: ""
	I1217 01:31:13.927359 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:13.927418 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:13.932191 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:13.932281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:13.963576 1225677 cri.go:89] found id: ""
	I1217 01:31:13.963605 1225677 logs.go:282] 0 containers: []
	W1217 01:31:13.963614 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:13.963623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:13.963636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:14.061267 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:14.061313 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:14.083208 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:14.083318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:14.113297 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:14.113328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:14.168503 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:14.168540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:14.225258 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:14.225299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:14.254658 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:14.254688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:14.329954 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:14.329994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:14.363830 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:14.363859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:14.780185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:14.772400    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.773031    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.774654    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.775150    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:14.776371    1542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:14.780213 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:14.780229 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:14.821746 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:14.821787 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.348276 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:17.359506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:17.359576 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:17.385494 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.385522 1225677 cri.go:89] found id: ""
	I1217 01:31:17.385531 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:17.385587 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.389291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:17.389381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:17.417467 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:17.417488 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:17.417493 1225677 cri.go:89] found id: ""
	I1217 01:31:17.417501 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:17.417557 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.421553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.425305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:17.425381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:17.452893 1225677 cri.go:89] found id: ""
	I1217 01:31:17.452925 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.452935 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:17.452945 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:17.453003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:17.479708 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.479730 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.479736 1225677 cri.go:89] found id: ""
	I1217 01:31:17.479743 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:17.479799 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.484009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.487543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:17.487617 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:17.522723 1225677 cri.go:89] found id: ""
	I1217 01:31:17.522751 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.522760 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:17.522767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:17.522829 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:17.550998 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.551023 1225677 cri.go:89] found id: ""
	I1217 01:31:17.551032 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:17.551086 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:17.554682 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:17.554767 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:17.587610 1225677 cri.go:89] found id: ""
	I1217 01:31:17.587650 1225677 logs.go:282] 0 containers: []
	W1217 01:31:17.587659 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:17.587684 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:17.587709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:17.616971 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:17.617002 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:17.692991 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:17.693034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:17.741052 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:17.741081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:17.761199 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:17.761228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:17.792936 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:17.793007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:17.845716 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:17.845753 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:17.881065 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:17.881096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:17.982043 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:17.982082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:18.070492 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:18.061244    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.061856    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063416    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.063944    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:18.065676    1674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:18.070517 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:18.070531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:18.117818 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:18.117911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.668542 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:20.679148 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:20.679242 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:20.706664 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:20.706687 1225677 cri.go:89] found id: ""
	I1217 01:31:20.706697 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:20.706757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.711072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:20.711147 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:20.737754 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:20.737779 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:20.737784 1225677 cri.go:89] found id: ""
	I1217 01:31:20.737792 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:20.737847 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.741755 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.745506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:20.745577 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:20.778364 1225677 cri.go:89] found id: ""
	I1217 01:31:20.778386 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.778394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:20.778400 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:20.778458 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:20.807237 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.807262 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:20.807267 1225677 cri.go:89] found id: ""
	I1217 01:31:20.807275 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:20.807361 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.811689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.815755 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:20.815857 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:20.842433 1225677 cri.go:89] found id: ""
	I1217 01:31:20.842454 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.842464 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:20.842470 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:20.842526 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:20.869792 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:20.869821 1225677 cri.go:89] found id: ""
	I1217 01:31:20.869831 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:20.869887 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:20.873765 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:20.873847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:20.900911 1225677 cri.go:89] found id: ""
	I1217 01:31:20.900940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:20.900952 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:20.900961 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:20.900974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:20.954883 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:20.954920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:21.002822 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:21.002852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:21.108368 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:21.108406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:21.135557 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:21.135588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:21.176576 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:21.176610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:21.205927 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:21.205961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:21.232870 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:21.232897 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:21.312344 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:21.312377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:21.333806 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:21.333836 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:21.415860 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:21.407804    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.408657    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410244    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.410552    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:21.412067    1820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:21.415895 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:21.415909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:23.961577 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:23.974520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:23.974616 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:24.008513 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.008538 1225677 cri.go:89] found id: ""
	I1217 01:31:24.008548 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:24.008627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.013203 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:24.013311 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:24.041344 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.041369 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.041374 1225677 cri.go:89] found id: ""
	I1217 01:31:24.041383 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:24.041499 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.045778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.049690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:24.049764 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:24.076869 1225677 cri.go:89] found id: ""
	I1217 01:31:24.076902 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.076912 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:24.076919 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:24.076982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:24.115429 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.115504 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.115535 1225677 cri.go:89] found id: ""
	I1217 01:31:24.115571 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:24.115649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.121035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.126165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:24.126286 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:24.153228 1225677 cri.go:89] found id: ""
	I1217 01:31:24.153253 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.153262 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:24.153268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:24.153326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:24.196715 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:24.196801 1225677 cri.go:89] found id: ""
	I1217 01:31:24.196825 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:24.196912 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:24.201554 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:24.201642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:24.230189 1225677 cri.go:89] found id: ""
	I1217 01:31:24.230214 1225677 logs.go:282] 0 containers: []
	W1217 01:31:24.230223 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:24.230232 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:24.230244 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:24.308144 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:24.308188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:24.326634 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:24.326664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:24.400916 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:24.391608    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.392652    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.393426    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.395665    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:24.396710    1904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:24.400938 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:24.400952 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:24.448701 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:24.448743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:24.482276 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:24.482309 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:24.515534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:24.515567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:24.625661 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:24.625708 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:24.652399 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:24.652439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:24.693518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:24.693556 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:24.750020 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:24.750059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.278748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:27.290609 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:27.290689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:27.316966 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.316991 1225677 cri.go:89] found id: ""
	I1217 01:31:27.316999 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:27.317054 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.320866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:27.320938 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:27.347398 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.347422 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.347427 1225677 cri.go:89] found id: ""
	I1217 01:31:27.347436 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:27.347496 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.351488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.355369 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:27.355442 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:27.381534 1225677 cri.go:89] found id: ""
	I1217 01:31:27.381564 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.381574 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:27.381580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:27.381662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:27.410739 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.410810 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.410822 1225677 cri.go:89] found id: ""
	I1217 01:31:27.410831 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:27.410892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.415095 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.419246 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:27.419364 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:27.447586 1225677 cri.go:89] found id: ""
	I1217 01:31:27.447612 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.447622 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:27.447629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:27.447693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:27.474916 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.474941 1225677 cri.go:89] found id: ""
	I1217 01:31:27.474950 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:27.475035 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:27.479118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:27.479203 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:27.506051 1225677 cri.go:89] found id: ""
	I1217 01:31:27.506078 1225677 logs.go:282] 0 containers: []
	W1217 01:31:27.506087 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:27.506097 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:27.506108 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:27.545535 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:27.545568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:27.641749 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:27.641830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:27.661191 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:27.661226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:27.738097 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:27.729735    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.730555    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732280    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.732819    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:27.734412    2053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:27.738120 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:27.738134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:27.782011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:27.782048 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:27.834514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:27.834550 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:27.905140 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:27.905177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:27.940830 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:27.940862 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:27.969106 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:27.969136 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:27.998807 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:27.998835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:30.578811 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:30.590365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:30.590444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:30.618562 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:30.618585 1225677 cri.go:89] found id: ""
	I1217 01:31:30.618594 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:30.618677 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.623874 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:30.624003 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:30.654712 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:30.654734 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.654740 1225677 cri.go:89] found id: ""
	I1217 01:31:30.654747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:30.654831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.658663 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.662256 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:30.662333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:30.690956 1225677 cri.go:89] found id: ""
	I1217 01:31:30.690983 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.691000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:30.691008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:30.691073 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:30.720079 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.720104 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.720110 1225677 cri.go:89] found id: ""
	I1217 01:31:30.720118 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:30.720190 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.724290 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.728443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:30.728569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:30.762597 1225677 cri.go:89] found id: ""
	I1217 01:31:30.762665 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.762683 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:30.762690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:30.762769 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:30.793999 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:30.794022 1225677 cri.go:89] found id: ""
	I1217 01:31:30.794031 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:30.794087 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:30.798031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:30.798111 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:30.825811 1225677 cri.go:89] found id: ""
	I1217 01:31:30.825838 1225677 logs.go:282] 0 containers: []
	W1217 01:31:30.825848 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:30.825858 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:30.825900 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:30.874308 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:30.874349 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:30.932548 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:30.932596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:30.973410 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:30.973440 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:31.061854 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:31.061893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:31.081279 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:31.081308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:31.173788 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:31.165503    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.166121    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.167773    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.168352    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:31.169889    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:31.173816 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:31.173832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:31.203476 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:31.203507 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:31.242819 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:31.242857 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:31.270107 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:31.270137 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:31.301308 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:31.301338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:33.901065 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:33.913301 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:33.913455 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:33.945005 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:33.945033 1225677 cri.go:89] found id: ""
	I1217 01:31:33.945042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:33.945100 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.949030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:33.949099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:33.980996 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:33.981019 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:33.981024 1225677 cri.go:89] found id: ""
	I1217 01:31:33.981032 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:33.981090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.985533 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:33.989328 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:33.989424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:34.020066 1225677 cri.go:89] found id: ""
	I1217 01:31:34.020105 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.020115 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:34.020123 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:34.020214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:34.054526 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.054551 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.054558 1225677 cri.go:89] found id: ""
	I1217 01:31:34.054566 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:34.054628 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.058716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.062466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:34.062539 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:34.100752 1225677 cri.go:89] found id: ""
	I1217 01:31:34.100777 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.100787 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:34.100794 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:34.100856 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:34.133409 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.133431 1225677 cri.go:89] found id: ""
	I1217 01:31:34.133440 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:34.133498 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:34.137315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:34.137386 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:34.169015 1225677 cri.go:89] found id: ""
	I1217 01:31:34.169048 1225677 logs.go:282] 0 containers: []
	W1217 01:31:34.169058 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:34.169068 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:34.169081 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:34.230112 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:34.230152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:34.275030 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:34.275071 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:34.303312 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:34.303341 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:34.323613 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:34.323791 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:34.377596 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:34.377632 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:34.405931 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:34.405961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:34.485309 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:34.485348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:34.537697 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:34.537780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:34.640362 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:34.640409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:34.719202 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:34.710848    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.711746    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713278    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.713853    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:34.715382    2377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:34.719227 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:34.719241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.248692 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:37.259883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:37.259952 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:37.288047 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.288071 1225677 cri.go:89] found id: ""
	I1217 01:31:37.288092 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:37.288147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.291723 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:37.291791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:37.320405 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.320468 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:37.320473 1225677 cri.go:89] found id: ""
	I1217 01:31:37.320481 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:37.320536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.324331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.327725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:37.327795 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:37.353914 1225677 cri.go:89] found id: ""
	I1217 01:31:37.353940 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.353949 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:37.353956 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:37.354033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:37.380050 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.380082 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:37.380088 1225677 cri.go:89] found id: ""
	I1217 01:31:37.380097 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:37.380169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.384466 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.388616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:37.388737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:37.434167 1225677 cri.go:89] found id: ""
	I1217 01:31:37.434203 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.434213 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:37.434235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:37.434327 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:37.463397 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.463418 1225677 cri.go:89] found id: ""
	I1217 01:31:37.463426 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:37.463501 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:37.467357 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:37.467429 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:37.496476 1225677 cri.go:89] found id: ""
	I1217 01:31:37.496504 1225677 logs.go:282] 0 containers: []
	W1217 01:31:37.496514 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:37.496523 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:37.496534 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:37.580269 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:37.580312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:37.598989 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:37.599020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:37.669887 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:37.661768    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.662555    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664201    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.664822    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:37.666286    2460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:37.669956 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:37.669985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:37.696910 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:37.696934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:37.741514 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:37.741546 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:37.797620 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:37.797657 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:37.827250 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:37.827277 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:37.860098 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:37.860127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:37.981956 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:37.982003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:38.045819 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:38.045855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.580761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:40.592635 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:40.592708 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:40.620832 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:40.620856 1225677 cri.go:89] found id: ""
	I1217 01:31:40.620866 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:40.620942 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.624827 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:40.624914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:40.662358 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.662381 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.662386 1225677 cri.go:89] found id: ""
	I1217 01:31:40.662394 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:40.662452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.666347 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.669969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:40.670068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:40.698897 1225677 cri.go:89] found id: ""
	I1217 01:31:40.698922 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.698931 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:40.698938 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:40.699026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:40.726184 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.726254 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:40.726265 1225677 cri.go:89] found id: ""
	I1217 01:31:40.726273 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:40.726331 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.730221 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.734070 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:40.734150 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:40.760090 1225677 cri.go:89] found id: ""
	I1217 01:31:40.760116 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.760125 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:40.760185 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:40.760251 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:40.790670 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:40.790693 1225677 cri.go:89] found id: ""
	I1217 01:31:40.790702 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:40.790754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:40.794861 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:40.794936 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:40.826103 1225677 cri.go:89] found id: ""
	I1217 01:31:40.826129 1225677 logs.go:282] 0 containers: []
	W1217 01:31:40.826138 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:40.826147 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:40.826160 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:40.878987 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:40.879066 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:40.924714 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:40.924751 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:40.980944 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:40.980981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:41.072994 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:41.073031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:41.105014 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:41.105042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:41.212780 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:41.212818 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:41.241014 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:41.241042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:41.277652 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:41.277684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:41.308943 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:41.308972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:41.328092 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:41.328123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:41.410133 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:41.401943    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.402640    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404144    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.404740    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:41.406273    2659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
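	The cycle above repeats every few seconds while the harness waits for the apiserver: it enumerates control-plane containers with crictl, tails the last 400 lines of each container it finds plus kubelet/CRI-O journals and dmesg, and retries `kubectl describe nodes`, which keeps failing because nothing is listening on localhost:8443. A minimal sketch of the same diagnostic pass, runnable by hand inside the node (assuming crictl and journalctl are on the PATH; the component list mirrors the names polled above), is:

	# hedged sketch: reproduce the log-gathering loop manually inside the node
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name $id ==="
	    sudo crictl logs --tail 400 "$id"        # per-container logs, as the harness does
	  done
	done
	sudo journalctl -u kubelet -n 400             # kubelet journal
	sudo journalctl -u crio -n 400                # CRI-O journal
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	Running this while the failure persists shows only etcd, kube-scheduler, kube-apiserver and kube-controller-manager containers; coredns, kube-proxy and kindnet never appear, consistent with the "No container was found matching" warnings in the log.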
	I1217 01:31:43.911410 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:43.924272 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:43.924351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:43.953227 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:43.953252 1225677 cri.go:89] found id: ""
	I1217 01:31:43.953261 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:43.953337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.957558 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:43.957674 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:43.984394 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:43.984493 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:43.984513 1225677 cri.go:89] found id: ""
	I1217 01:31:43.984547 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:43.984626 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.988727 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:43.992395 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:43.992531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:44.023165 1225677 cri.go:89] found id: ""
	I1217 01:31:44.023242 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.023265 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:44.023285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:44.023376 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:44.056175 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.056249 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.056268 1225677 cri.go:89] found id: ""
	I1217 01:31:44.056293 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:44.056373 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.060006 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.063548 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:44.063623 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:44.091849 1225677 cri.go:89] found id: ""
	I1217 01:31:44.091875 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.091886 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:44.091892 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:44.091950 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:44.125771 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.125837 1225677 cri.go:89] found id: ""
	I1217 01:31:44.125861 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:44.125938 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:44.129707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:44.129781 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:44.157267 1225677 cri.go:89] found id: ""
	I1217 01:31:44.157343 1225677 logs.go:282] 0 containers: []
	W1217 01:31:44.157359 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:44.157369 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:44.157380 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:44.179921 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:44.180042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:44.227426 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:44.227495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:44.268056 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:44.268089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:44.312908 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:44.312943 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:44.344639 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:44.344673 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:44.370623 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:44.370650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:44.400984 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:44.401017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:44.494253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:44.494291 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:44.563778 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:44.555586    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.556392    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558149    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.558437    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:44.559894    2780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:44.563859 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:44.563887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:44.630776 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:44.630812 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.217775 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:47.228858 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:47.228999 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:47.258264 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.258287 1225677 cri.go:89] found id: ""
	I1217 01:31:47.258305 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:47.258366 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.262265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:47.262366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:47.293485 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.293508 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.293552 1225677 cri.go:89] found id: ""
	I1217 01:31:47.293562 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:47.293623 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.297395 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.300792 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:47.300866 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:47.329792 1225677 cri.go:89] found id: ""
	I1217 01:31:47.329818 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.329827 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:47.329833 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:47.329890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:47.356681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.356747 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:47.356758 1225677 cri.go:89] found id: ""
	I1217 01:31:47.356767 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:47.356839 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.360948 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.364494 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:47.364598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:47.390993 1225677 cri.go:89] found id: ""
	I1217 01:31:47.391021 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.391031 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:47.391037 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:47.391099 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:47.417453 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.417517 1225677 cri.go:89] found id: ""
	I1217 01:31:47.417541 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:47.417618 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:47.421365 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:47.421437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:47.447227 1225677 cri.go:89] found id: ""
	I1217 01:31:47.447254 1225677 logs.go:282] 0 containers: []
	W1217 01:31:47.447264 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:47.447273 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:47.447285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:47.474445 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:47.474475 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:47.546929 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:47.539495    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.539987    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541410    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.541727    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:47.543180    2870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:47.546947 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:47.546962 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:47.621943 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:47.621985 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:47.653654 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:47.653679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:47.751509 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:47.751548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:47.773290 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:47.773323 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:47.802347 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:47.802378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:47.849646 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:47.849680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:47.894275 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:47.894315 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:47.949242 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:47.949281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.480769 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:50.491711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:50.491827 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:50.519320 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.519345 1225677 cri.go:89] found id: ""
	I1217 01:31:50.519353 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:50.519440 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.523424 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:50.523533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:50.551627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:50.551652 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:50.551658 1225677 cri.go:89] found id: ""
	I1217 01:31:50.551665 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:50.551751 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.555585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.559244 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:50.559347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:50.586218 1225677 cri.go:89] found id: ""
	I1217 01:31:50.586241 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.586249 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:50.586255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:50.586333 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:50.618629 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.618661 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.618667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.618675 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:50.618776 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.622850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.626687 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:50.626824 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:50.659667 1225677 cri.go:89] found id: ""
	I1217 01:31:50.659703 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.659713 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:50.659738 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:50.659817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:50.686997 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.687069 1225677 cri.go:89] found id: ""
	I1217 01:31:50.687092 1225677 logs.go:282] 1 containers: [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:50.687160 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:50.690709 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:50.690823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:50.721432 1225677 cri.go:89] found id: ""
	I1217 01:31:50.721509 1225677 logs.go:282] 0 containers: []
	W1217 01:31:50.721534 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:50.721553 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:50.721583 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:50.748223 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:50.748250 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:50.807290 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:50.807328 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:50.835575 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:50.835603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:50.861513 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:50.861539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:50.937079 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:50.937118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:51.023701 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:51.014086    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.014500    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.016983    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.017968    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:51.019585    3031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:51.023722 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:51.023736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:51.063322 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:51.063360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:51.134936 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:51.134983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:51.172581 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:51.172611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:51.279920 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:51.279958 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:53.800293 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:53.813493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:53.813572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:53.855699 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:53.855727 1225677 cri.go:89] found id: ""
	I1217 01:31:53.855737 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:53.855790 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.860842 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:53.860915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:53.905688 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:53.905715 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:53.905720 1225677 cri.go:89] found id: ""
	I1217 01:31:53.905727 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:53.905796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.911027 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:53.916033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:53.916105 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:53.971312 1225677 cri.go:89] found id: ""
	I1217 01:31:53.971339 1225677 logs.go:282] 0 containers: []
	W1217 01:31:53.971349 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:53.971356 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:53.971477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:54.021427 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.021456 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:54.021474 1225677 cri.go:89] found id: ""
	I1217 01:31:54.021488 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:54.021585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.030798 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.035177 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:54.035371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:54.113099 1225677 cri.go:89] found id: ""
	I1217 01:31:54.113124 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.113133 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:54.113139 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:54.113246 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:54.166627 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.166651 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.166658 1225677 cri.go:89] found id: ""
	I1217 01:31:54.166665 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:54.166783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.171754 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:54.182182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:54.182283 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:54.234503 1225677 cri.go:89] found id: ""
	I1217 01:31:54.234567 1225677 logs.go:282] 0 containers: []
	W1217 01:31:54.234591 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:54.234615 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:54.234642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:54.275461 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:54.275532 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:54.366758 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:54.366801 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:54.403474 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:54.403513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:54.422090 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:54.422131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:54.486461 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:54.486497 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:54.553429 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:54.553466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:54.599563 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:54.599593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:54.706755 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:54.706795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:54.812798 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:54.804605    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.805386    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807100    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.807609    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:54.809207    3192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:54.812822 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:54.812835 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:54.838401 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:54.838433 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:54.893784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:54.893823 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.427168 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:31:57.438551 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:31:57.438655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:31:57.468636 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:57.468660 1225677 cri.go:89] found id: ""
	I1217 01:31:57.468669 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:31:57.468726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.472745 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:31:57.472819 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:31:57.500682 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:31:57.500702 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.500707 1225677 cri.go:89] found id: ""
	I1217 01:31:57.500714 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:31:57.500777 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.504719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.508458 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:31:57.508557 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:31:57.540789 1225677 cri.go:89] found id: ""
	I1217 01:31:57.540813 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.540822 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:31:57.540828 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:31:57.540889 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:31:57.570366 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.570392 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:57.570398 1225677 cri.go:89] found id: ""
	I1217 01:31:57.570406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:31:57.570462 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.574531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.578702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:31:57.578782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:31:57.608017 1225677 cri.go:89] found id: ""
	I1217 01:31:57.608042 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.608051 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:31:57.608058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:31:57.608122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:31:57.634195 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:57.634218 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.634224 1225677 cri.go:89] found id: ""
	I1217 01:31:57.634232 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:31:57.634317 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.638339 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:31:57.642068 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:31:57.642166 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:31:57.669214 1225677 cri.go:89] found id: ""
	I1217 01:31:57.669250 1225677 logs.go:282] 0 containers: []
	W1217 01:31:57.669259 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:31:57.669268 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:31:57.669284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:31:57.733958 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:31:57.733991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:31:57.790688 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:31:57.790731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:31:57.825378 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:31:57.825409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:31:57.903425 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:31:57.903465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:31:57.977243 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:31:57.969023    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.970010    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971635    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.971960    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:31:57.973498    3311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:31:57.977266 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:31:57.977280 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:31:58.008228 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:31:58.008262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:31:58.044832 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:31:58.044861 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:31:58.076961 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:31:58.077009 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:31:58.174022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:31:58.174061 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:31:58.194526 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:31:58.194561 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:31:58.225629 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:31:58.225658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.768659 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:00.779781 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:00.779855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:00.809961 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:00.809984 1225677 cri.go:89] found id: ""
	I1217 01:32:00.809993 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:00.810055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.814113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:00.814232 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:00.842110 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:00.842179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:00.842193 1225677 cri.go:89] found id: ""
	I1217 01:32:00.842202 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:00.842259 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.846284 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.850463 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:00.850535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:00.877321 1225677 cri.go:89] found id: ""
	I1217 01:32:00.877347 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.877357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:00.877364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:00.877424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:00.903950 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:00.904025 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:00.904044 1225677 cri.go:89] found id: ""
	I1217 01:32:00.904065 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:00.904183 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.907995 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.911685 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:00.911762 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:00.940826 1225677 cri.go:89] found id: ""
	I1217 01:32:00.940856 1225677 logs.go:282] 0 containers: []
	W1217 01:32:00.940865 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:00.940871 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:00.940931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:00.967056 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:00.967077 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:00.967088 1225677 cri.go:89] found id: ""
	I1217 01:32:00.967097 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:00.967175 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.970953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:00.975717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:00.975791 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:01.010237 1225677 cri.go:89] found id: ""
	I1217 01:32:01.010262 1225677 logs.go:282] 0 containers: []
	W1217 01:32:01.010272 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:01.010281 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:01.010294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:01.030320 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:01.030353 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:01.055381 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:01.055409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:01.097515 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:01.097548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:01.166756 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:01.166797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:01.208792 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:01.208824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:01.246024 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:01.246056 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:01.340436 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:01.340519 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:01.412662 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:01.403637    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.404391    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406195    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.406915    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:01.408629    3477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:01.412684 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:01.412699 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:01.467190 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:01.467228 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:01.500459 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:01.500486 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:01.531449 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:01.531477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:04.134627 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:04.145902 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:04.145978 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:04.185746 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.185766 1225677 cri.go:89] found id: ""
	I1217 01:32:04.185774 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:04.185831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.189797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:04.189867 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:04.228673 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.228694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.228698 1225677 cri.go:89] found id: ""
	I1217 01:32:04.228706 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:04.228759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.233260 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.238075 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:04.238212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:04.268955 1225677 cri.go:89] found id: ""
	I1217 01:32:04.268983 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.268992 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:04.268999 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:04.269102 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:04.299973 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.300041 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.300061 1225677 cri.go:89] found id: ""
	I1217 01:32:04.300088 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:04.300185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.303813 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.307456 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:04.307533 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:04.334293 1225677 cri.go:89] found id: ""
	I1217 01:32:04.334319 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.334331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:04.334338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:04.334398 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:04.360886 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.360906 1225677 cri.go:89] found id: "53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.360910 1225677 cri.go:89] found id: ""
	I1217 01:32:04.360918 1225677 logs.go:282] 2 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9]
	I1217 01:32:04.360974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.365024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:04.368933 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:04.369005 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:04.397116 1225677 cri.go:89] found id: ""
	I1217 01:32:04.397140 1225677 logs.go:282] 0 containers: []
	W1217 01:32:04.397149 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:04.397159 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:04.397174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:04.490637 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:04.490721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:04.531861 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:04.531938 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:04.577801 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:04.577838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:04.635487 1225677 logs.go:123] Gathering logs for kube-controller-manager [53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9] ...
	I1217 01:32:04.635524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 53b3f8ad32b886fc82f4d92d4670ef3a31d7f69b04817ef5b86fdd627dc599c9"
	I1217 01:32:04.667260 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:04.667290 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:04.718117 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:04.718146 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:04.737680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:04.737711 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:04.825872 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:04.817699    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.818465    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.819921    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.820558    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:04.822046    3619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:04.825894 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:04.825908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:04.858804 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:04.858833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:04.887920 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:04.887953 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:04.916371 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:04.916476 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:07.492728 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:07.504442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:07.504532 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:07.538372 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.538403 1225677 cri.go:89] found id: ""
	I1217 01:32:07.538442 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:07.538517 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.542523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:07.542597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:07.576339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:07.576360 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:07.576364 1225677 cri.go:89] found id: ""
	I1217 01:32:07.576372 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:07.576459 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.580149 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.584111 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:07.584196 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:07.610578 1225677 cri.go:89] found id: ""
	I1217 01:32:07.610605 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.610614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:07.610621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:07.610678 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:07.637129 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:07.637151 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:07.637157 1225677 cri.go:89] found id: ""
	I1217 01:32:07.637164 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:07.637217 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.641090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.644872 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:07.644992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:07.679300 1225677 cri.go:89] found id: ""
	I1217 01:32:07.679322 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.679331 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:07.679350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:07.679419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:07.719129 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:07.719155 1225677 cri.go:89] found id: ""
	I1217 01:32:07.719164 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:07.719231 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:07.723681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:07.723755 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:07.756924 1225677 cri.go:89] found id: ""
	I1217 01:32:07.756950 1225677 logs.go:282] 0 containers: []
	W1217 01:32:07.756969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:07.756979 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:07.756991 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:07.856049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:07.856088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:07.935429 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:07.926499    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.927333    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929065    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.929922    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:07.931509    3717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:07.935456 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:07.935469 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:07.961013 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:07.961042 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:08.005989 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:08.006024 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:08.039061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:08.039092 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:08.058159 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:08.058194 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:08.112456 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:08.112490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:08.176389 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:08.176457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:08.215782 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:08.215809 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:08.244713 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:08.244743 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:10.828143 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:10.838717 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:10.838793 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:10.869672 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:10.869696 1225677 cri.go:89] found id: ""
	I1217 01:32:10.869705 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:10.869761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.873603 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:10.873720 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:10.900811 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:10.900837 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:10.900843 1225677 cri.go:89] found id: ""
	I1217 01:32:10.900851 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:10.900906 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.904643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.908193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:10.908261 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:10.935598 1225677 cri.go:89] found id: ""
	I1217 01:32:10.935624 1225677 logs.go:282] 0 containers: []
	W1217 01:32:10.935634 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:10.935641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:10.935698 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:10.966869 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:10.966894 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:10.966899 1225677 cri.go:89] found id: ""
	I1217 01:32:10.966907 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:10.966962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.970920 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:10.974605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:10.974715 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:11.012577 1225677 cri.go:89] found id: ""
	I1217 01:32:11.012602 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.012612 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:11.012618 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:11.012680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:11.048075 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.048100 1225677 cri.go:89] found id: ""
	I1217 01:32:11.048130 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:11.048185 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:11.052014 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:11.052089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:11.084486 1225677 cri.go:89] found id: ""
	I1217 01:32:11.084511 1225677 logs.go:282] 0 containers: []
	W1217 01:32:11.084524 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:11.084533 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:11.084545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:11.192042 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:11.192076 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:11.218345 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:11.218378 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:11.261837 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:11.261869 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:11.321100 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:11.321138 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:11.356360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:11.356390 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:11.433012 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:11.433054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:11.511248 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:11.502020    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.502741    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.504411    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.505125    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:11.506872    3883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:11.511270 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:11.511287 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:11.549584 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:11.549614 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:11.596753 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:11.596786 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:11.626208 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:11.626240 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.173611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:14.187629 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:14.187704 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:14.223146 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.223170 1225677 cri.go:89] found id: ""
	I1217 01:32:14.223179 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:14.223264 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.227607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:14.227721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:14.255753 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:14.255791 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.255796 1225677 cri.go:89] found id: ""
	I1217 01:32:14.255804 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:14.255881 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.259963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.263644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:14.263717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:14.290575 1225677 cri.go:89] found id: ""
	I1217 01:32:14.290599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.290614 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:14.290621 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:14.290681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:14.318287 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.318309 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.318314 1225677 cri.go:89] found id: ""
	I1217 01:32:14.318323 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:14.318378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.322352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.326073 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:14.326157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:14.352179 1225677 cri.go:89] found id: ""
	I1217 01:32:14.352205 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.352214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:14.352221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:14.352304 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:14.380539 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.380565 1225677 cri.go:89] found id: ""
	I1217 01:32:14.380582 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:14.380678 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:14.385134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:14.385210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:14.417374 1225677 cri.go:89] found id: ""
	I1217 01:32:14.417407 1225677 logs.go:282] 0 containers: []
	W1217 01:32:14.417417 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:14.417441 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:14.417457 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:14.464173 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:14.464209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:14.491958 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:14.492035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:14.547112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:14.547180 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:14.617502 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:14.608513    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.609388    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611073    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.611400    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:14.612965    4016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:14.617525 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:14.617548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:14.645669 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:14.645697 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:14.705027 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:14.705070 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:14.738615 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:14.738689 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:14.819881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:14.819961 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:14.917702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:14.917739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:14.940092 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:14.940127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.482077 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:17.493126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:17.493227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:17.520116 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.520137 1225677 cri.go:89] found id: ""
	I1217 01:32:17.520155 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:17.520234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.524492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:17.524572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:17.553355 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:17.553419 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:17.553439 1225677 cri.go:89] found id: ""
	I1217 01:32:17.553454 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:17.553512 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.557145 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.560580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:17.560663 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:17.586798 1225677 cri.go:89] found id: ""
	I1217 01:32:17.586824 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.586843 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:17.586850 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:17.586915 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:17.614063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.614096 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:17.614102 1225677 cri.go:89] found id: ""
	I1217 01:32:17.614110 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:17.614174 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.618083 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.621593 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:17.621662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:17.652917 1225677 cri.go:89] found id: ""
	I1217 01:32:17.652943 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.652964 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:17.652972 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:17.653029 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:17.679412 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.679435 1225677 cri.go:89] found id: ""
	I1217 01:32:17.679443 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:17.679508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:17.683530 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:17.683606 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:17.714591 1225677 cri.go:89] found id: ""
	I1217 01:32:17.714618 1225677 logs.go:282] 0 containers: []
	W1217 01:32:17.714628 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:17.714638 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:17.714652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:17.774158 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:17.774193 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:17.802731 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:17.802759 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:17.837385 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:17.837413 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:17.948723 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:17.948766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:17.967594 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:17.967622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:17.997257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:17.997350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:18.046163 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:18.046204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:18.075264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:18.075345 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:18.179955 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:18.180007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:18.261983 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:18.253698    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.254348    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.255955    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.256597    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:18.258222    4180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:18.262017 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:18.262034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.814850 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:20.826637 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:20.826710 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:20.867818 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:20.867839 1225677 cri.go:89] found id: ""
	I1217 01:32:20.867847 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:20.867902 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.871814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:20.871895 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:20.902722 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:20.902742 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:20.902746 1225677 cri.go:89] found id: ""
	I1217 01:32:20.902755 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:20.902808 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.907236 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.911156 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:20.911230 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:20.937933 1225677 cri.go:89] found id: ""
	I1217 01:32:20.937959 1225677 logs.go:282] 0 containers: []
	W1217 01:32:20.937968 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:20.937974 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:20.938063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:20.965558 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:20.965581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:20.965587 1225677 cri.go:89] found id: ""
	I1217 01:32:20.965595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:20.965652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.969565 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:20.973428 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:20.973498 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:21.012487 1225677 cri.go:89] found id: ""
	I1217 01:32:21.012512 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.012521 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:21.012527 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:21.012590 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:21.041411 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.041443 1225677 cri.go:89] found id: ""
	I1217 01:32:21.041455 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:21.041515 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:21.045571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:21.045672 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:21.074982 1225677 cri.go:89] found id: ""
	I1217 01:32:21.075005 1225677 logs.go:282] 0 containers: []
	W1217 01:32:21.075014 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:21.075023 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:21.075036 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:21.105151 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:21.105181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:21.131324 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:21.131398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:21.228426 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:21.228461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:21.285988 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:21.286020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:21.369964 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:21.370005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:21.406263 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:21.406295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:21.425680 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:21.425710 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:21.503044 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:21.494646    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.495325    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.496896    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.497537    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:21.499265    4299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:21.503067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:21.503083 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:21.533119 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:21.533147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:21.584619 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:21.584652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
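
The repeated `kubectl describe nodes` failures in this stretch of the log all share one symptom: nothing is accepting connections on localhost:8443, so only node-local sources (kubelet, CRI-O, crictl, dmesg) can actually be collected. A minimal Go sketch, not taken from minikube's code, that reproduces the same reachability probe; the host and port come from the log lines above, everything else is illustrative:

	// Probe the apiserver endpoint that the failed kubectl calls above point at.
	// Illustrative only; host/port are copied from the log, the rest is an assumption.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Mirrors the log: dial tcp [::1]:8443: connect: connection refused
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
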
	I1217 01:32:24.145239 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:24.156031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:24.156112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:24.191491 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.191515 1225677 cri.go:89] found id: ""
	I1217 01:32:24.191523 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:24.191579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.196271 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:24.196344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:24.229412 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.229433 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.229437 1225677 cri.go:89] found id: ""
	I1217 01:32:24.229445 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:24.229502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.233353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.237055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:24.237137 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:24.264226 1225677 cri.go:89] found id: ""
	I1217 01:32:24.264252 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.264262 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:24.264268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:24.264330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:24.300946 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.300972 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.300977 1225677 cri.go:89] found id: ""
	I1217 01:32:24.300984 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:24.301038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.304900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.308160 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:24.308277 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:24.334573 1225677 cri.go:89] found id: ""
	I1217 01:32:24.334596 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.334606 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:24.334612 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:24.334670 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:24.367769 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.367791 1225677 cri.go:89] found id: ""
	I1217 01:32:24.367800 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:24.367853 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:24.371482 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:24.371586 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:24.398071 1225677 cri.go:89] found id: ""
	I1217 01:32:24.398095 1225677 logs.go:282] 0 containers: []
	W1217 01:32:24.398104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:24.398112 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:24.398124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:24.466998 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:24.458849    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.459480    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461088    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.461552    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:24.463254    4391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:24.467073 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:24.467093 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:24.494797 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:24.494826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:24.566818 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:24.566859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:24.627760 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:24.627797 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:24.657250 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:24.657278 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:24.683514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:24.683549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:24.703093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:24.703129 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:24.757376 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:24.757411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:24.839791 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:24.839826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:24.883947 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:24.883978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:27.492559 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:27.503372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:27.503445 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:27.541590 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:27.541611 1225677 cri.go:89] found id: ""
	I1217 01:32:27.541620 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:27.541675 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.545373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:27.545448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:27.571462 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:27.571486 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:27.571491 1225677 cri.go:89] found id: ""
	I1217 01:32:27.571499 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:27.571555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.575671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.579240 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:27.579332 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:27.612215 1225677 cri.go:89] found id: ""
	I1217 01:32:27.612245 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.612254 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:27.612261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:27.612339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:27.639672 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:27.639696 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.639701 1225677 cri.go:89] found id: ""
	I1217 01:32:27.639708 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:27.639782 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.643953 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.647820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:27.647942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:27.673115 1225677 cri.go:89] found id: ""
	I1217 01:32:27.673141 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.673150 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:27.673157 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:27.673215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:27.703404 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.703428 1225677 cri.go:89] found id: ""
	I1217 01:32:27.703437 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:27.703566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:27.708031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:27.708106 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:27.736748 1225677 cri.go:89] found id: ""
	I1217 01:32:27.736770 1225677 logs.go:282] 0 containers: []
	W1217 01:32:27.736779 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:27.736789 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:27.736802 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:27.763699 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:27.763727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:27.790990 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:27.791020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:27.871644 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:27.871680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:27.904392 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:27.904499 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:27.926297 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:27.926333 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:28.002149 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:27.991733    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.992595    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994231    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.994591    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:27.996141    4560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:28.002177 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:28.002196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:28.030901 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:28.030933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:28.070431 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:28.070463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:28.124957 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:28.124994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:28.185427 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:28.185465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
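
Each gathering cycle above enumerates containers per control-plane component with `sudo crictl ps -a --quiet --name=<component>` and treats an empty result as "No container was found". A hedged Go sketch of that enumeration, shelling out to crictl in the same way; the helper name listContainerIDs is hypothetical and the snippet is illustrative rather than minikube's own implementation:

	// List container IDs for each component via crictl, as the cri.go lines above do.
	// Requires crictl on PATH and sudo rights; purely a sketch.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
		}
	}
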
	I1217 01:32:30.787761 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:30.798953 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:30.799025 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:30.826532 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:30.826561 1225677 cri.go:89] found id: ""
	I1217 01:32:30.826570 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:30.826631 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.830429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:30.830503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:30.856397 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:30.856449 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:30.856462 1225677 cri.go:89] found id: ""
	I1217 01:32:30.856470 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:30.856524 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.860460 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.864121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:30.864204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:30.893119 1225677 cri.go:89] found id: ""
	I1217 01:32:30.893143 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.893153 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:30.893166 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:30.893225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:30.942371 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:30.942393 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:30.942398 1225677 cri.go:89] found id: ""
	I1217 01:32:30.942406 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:30.942463 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.947748 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:30.953053 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:30.953140 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:30.991763 1225677 cri.go:89] found id: ""
	I1217 01:32:30.991793 1225677 logs.go:282] 0 containers: []
	W1217 01:32:30.991802 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:30.991817 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:30.991888 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:31.026936 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.026958 1225677 cri.go:89] found id: ""
	I1217 01:32:31.026967 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:31.027022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:31.031253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:31.031338 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:31.060606 1225677 cri.go:89] found id: ""
	I1217 01:32:31.060632 1225677 logs.go:282] 0 containers: []
	W1217 01:32:31.060641 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:31.060650 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:31.060666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:31.089805 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:31.089837 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:31.179774 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:31.179814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:31.231705 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:31.231739 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:31.264982 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:31.265014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:31.295319 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:31.295348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:31.398598 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:31.398635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:31.418439 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:31.418473 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:31.505328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:31.497591    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.498213    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.499716    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.500165    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:31.501631    4705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:31.505348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:31.505364 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:31.534574 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:31.534604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:31.584571 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:31.584607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.145660 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:34.156555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:34.156680 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:34.189334 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.189353 1225677 cri.go:89] found id: ""
	I1217 01:32:34.189361 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:34.189415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.193025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:34.193117 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:34.229137 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.229160 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.229165 1225677 cri.go:89] found id: ""
	I1217 01:32:34.229176 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:34.229234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.232921 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.236260 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:34.236361 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:34.264990 1225677 cri.go:89] found id: ""
	I1217 01:32:34.265013 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.265022 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:34.265028 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:34.265086 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:34.292130 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.292205 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.292225 1225677 cri.go:89] found id: ""
	I1217 01:32:34.292250 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:34.292344 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.295987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.299388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:34.299500 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:34.325943 1225677 cri.go:89] found id: ""
	I1217 01:32:34.326026 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.326042 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:34.326049 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:34.326108 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:34.363328 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.363351 1225677 cri.go:89] found id: ""
	I1217 01:32:34.363361 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:34.363415 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:34.367803 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:34.367878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:34.394984 1225677 cri.go:89] found id: ""
	I1217 01:32:34.395011 1225677 logs.go:282] 0 containers: []
	W1217 01:32:34.395020 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:34.395029 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:34.395065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:34.470015 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:34.461494    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.462548    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464280    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.464837    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:34.466346    4801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:34.470036 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:34.470049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:34.496057 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:34.496091 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:34.549522 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:34.549555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:34.592693 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:34.592728 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:34.652425 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:34.652505 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:34.680716 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:34.680747 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:34.707492 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:34.707522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:34.787410 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:34.787492 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:34.892246 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:34.892284 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:34.910499 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:34.910530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
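
The timestamps show the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe recurring roughly every three seconds while the apiserver stays unreachable, with a full log-gathering pass after each miss. An illustrative Go sketch of such a poll-with-deadline loop; the two-minute deadline is an assumption, not something stated in the log:

	// Poll for a running kube-apiserver process until a deadline, mimicking the
	// ~3s cadence visible in the timestamps above. Sketch only.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same probe as the log lines: sudo pgrep -xnf kube-apiserver.*minikube.*
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
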
	I1217 01:32:37.463203 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:37.474127 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:37.474200 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:37.506946 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.507018 1225677 cri.go:89] found id: ""
	I1217 01:32:37.507042 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:37.507123 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.511460 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:37.511535 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:37.546992 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:37.547014 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:37.547020 1225677 cri.go:89] found id: ""
	I1217 01:32:37.547028 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:37.547090 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.550864 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.554364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:37.554450 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:37.592224 1225677 cri.go:89] found id: ""
	I1217 01:32:37.592353 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.592394 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:37.592437 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:37.592579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:37.620557 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.620581 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:37.620587 1225677 cri.go:89] found id: ""
	I1217 01:32:37.620595 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:37.620691 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.624719 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.628465 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:37.628541 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:37.657843 1225677 cri.go:89] found id: ""
	I1217 01:32:37.657870 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.657878 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:37.657885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:37.657955 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:37.686792 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.686825 1225677 cri.go:89] found id: ""
	I1217 01:32:37.686834 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:37.686898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:37.690651 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:37.690783 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:37.719977 1225677 cri.go:89] found id: ""
	I1217 01:32:37.720000 1225677 logs.go:282] 0 containers: []
	W1217 01:32:37.720009 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:37.720018 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:37.720030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:37.738580 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:37.738610 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:37.814847 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:37.806596    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.807160    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.808944    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.809466    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:37.811020    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:37.814869 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:37.814883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:37.840694 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:37.840723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:37.901817 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:37.901855 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:37.935757 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:37.935839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:38.014642 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:38.014679 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:38.115079 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:38.115123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:38.157390 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:38.157423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:38.204086 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:38.204123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:38.235323 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:38.235355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:40.766175 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:40.777746 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:40.777818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:40.809026 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:40.809051 1225677 cri.go:89] found id: ""
	I1217 01:32:40.809060 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:40.809157 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.813212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:40.813294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:40.840793 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:40.840821 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:40.840826 1225677 cri.go:89] found id: ""
	I1217 01:32:40.840834 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:40.840915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.845018 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.848655 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:40.848732 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:40.875726 1225677 cri.go:89] found id: ""
	I1217 01:32:40.875750 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.875761 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:40.875767 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:40.875825 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:40.902504 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:40.902527 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:40.902532 1225677 cri.go:89] found id: ""
	I1217 01:32:40.902540 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:40.902593 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.906394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.910259 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:40.910330 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:40.936570 1225677 cri.go:89] found id: ""
	I1217 01:32:40.936599 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.936609 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:40.936616 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:40.936676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:40.964358 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:40.964381 1225677 cri.go:89] found id: ""
	I1217 01:32:40.964389 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:40.964541 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:40.968221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:40.968292 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:40.998606 1225677 cri.go:89] found id: ""
	I1217 01:32:40.998633 1225677 logs.go:282] 0 containers: []
	W1217 01:32:40.998644 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:40.998654 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:40.998668 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:41.022520 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:41.022551 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:41.051598 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:41.051625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:41.091115 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:41.091148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:41.159179 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:41.159223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:41.190970 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:41.190997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:41.225786 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:41.225815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:41.294484 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:41.286397    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.287188    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.288947    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.289536    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:41.290626    5123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:41.294509 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:41.294523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:41.346979 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:41.347017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:41.374095 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:41.374126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:41.456622 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:41.456658 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.066375 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:44.077293 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:44.077365 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:44.104332 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.104476 1225677 cri.go:89] found id: ""
	I1217 01:32:44.104504 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:44.104580 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.108715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:44.108799 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:44.140649 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.140672 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.140677 1225677 cri.go:89] found id: ""
	I1217 01:32:44.140684 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:44.140763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.144834 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.148730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:44.148811 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:44.197233 1225677 cri.go:89] found id: ""
	I1217 01:32:44.197259 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.197268 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:44.197274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:44.197350 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:44.240339 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:44.240363 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.240368 1225677 cri.go:89] found id: ""
	I1217 01:32:44.240376 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:44.240456 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.244962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.248793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:44.248913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:44.278464 1225677 cri.go:89] found id: ""
	I1217 01:32:44.278491 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.278501 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:44.278507 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:44.278585 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:44.308914 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.308938 1225677 cri.go:89] found id: ""
	I1217 01:32:44.308958 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:44.309048 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:44.313878 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:44.313951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:44.344530 1225677 cri.go:89] found id: ""
	I1217 01:32:44.344555 1225677 logs.go:282] 0 containers: []
	W1217 01:32:44.344577 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:44.344588 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:44.344600 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:44.372833 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:44.372864 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:44.452952 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:44.452990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:44.474609 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:44.474642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:44.552482 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:44.543209    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.543986    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546020    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.546680    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:44.548406    5228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:44.552507 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:44.552521 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:44.580322 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:44.580352 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:44.610292 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:44.610320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:44.643236 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:44.643266 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:44.755542 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:44.755601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:44.808715 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:44.808771 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:44.856301 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:44.856338 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.419847 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:47.431877 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:47.431951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:47.461659 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:47.461682 1225677 cri.go:89] found id: ""
	I1217 01:32:47.461690 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:47.461747 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.465698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:47.465822 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:47.495157 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.495179 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.495184 1225677 cri.go:89] found id: ""
	I1217 01:32:47.495192 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:47.495247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.499337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.503995 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:47.504080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:47.543135 1225677 cri.go:89] found id: ""
	I1217 01:32:47.543158 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.543167 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:47.543174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:47.543238 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:47.572765 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:47.572791 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:47.572797 1225677 cri.go:89] found id: ""
	I1217 01:32:47.572804 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:47.572867 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.577796 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.581659 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:47.581760 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:47.612595 1225677 cri.go:89] found id: ""
	I1217 01:32:47.612660 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.612674 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:47.612681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:47.612744 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:47.642199 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:47.642223 1225677 cri.go:89] found id: ""
	I1217 01:32:47.642231 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:47.642287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:47.646215 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:47.646285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:47.672805 1225677 cri.go:89] found id: ""
	I1217 01:32:47.672830 1225677 logs.go:282] 0 containers: []
	W1217 01:32:47.672839 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:47.672849 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:47.672859 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:47.702885 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:47.702917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:47.723284 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:47.723318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:47.799644 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:47.789320    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.790201    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792235    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.792946    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:47.795558    5369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:47.799674 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:47.799688 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:47.839852 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:47.839884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:47.888519 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:47.888557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:47.973305 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:47.973344 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:48.081814 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:48.081853 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:48.114561 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:48.114590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:48.208193 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:48.208234 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:48.241262 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:48.241293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.770940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:50.781882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:50.781951 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:50.809569 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:50.809595 1225677 cri.go:89] found id: ""
	I1217 01:32:50.809604 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:50.809665 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.814519 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:50.814594 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:50.849443 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:50.849472 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:50.849478 1225677 cri.go:89] found id: ""
	I1217 01:32:50.849486 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:50.849564 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.853510 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.857119 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:50.857224 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:50.888246 1225677 cri.go:89] found id: ""
	I1217 01:32:50.888275 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.888284 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:50.888291 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:50.888351 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:50.916294 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:50.916320 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:50.916326 1225677 cri.go:89] found id: ""
	I1217 01:32:50.916333 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:50.916388 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.920299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.924658 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:50.924730 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:50.957966 1225677 cri.go:89] found id: ""
	I1217 01:32:50.957994 1225677 logs.go:282] 0 containers: []
	W1217 01:32:50.958003 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:50.958009 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:50.958069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:50.991282 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:50.991304 1225677 cri.go:89] found id: ""
	I1217 01:32:50.991312 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:50.991377 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:50.995730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:50.995797 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:51.034122 1225677 cri.go:89] found id: ""
	I1217 01:32:51.034199 1225677 logs.go:282] 0 containers: []
	W1217 01:32:51.034238 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:51.034266 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:51.034295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:51.062022 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:51.062100 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:51.081698 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:51.081733 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:51.112382 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:51.112482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:51.172152 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:51.172190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:51.213603 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:51.213634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:51.297400 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:51.297439 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:51.331335 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:51.331412 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:51.426253 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:51.426289 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:51.499310 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:51.490841    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.491452    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.493387    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.494013    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:51.495680    5542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:51.499332 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:51.499348 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:51.572760 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:51.572795 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.122214 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:54.133644 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:54.133721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:54.162887 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.162912 1225677 cri.go:89] found id: ""
	I1217 01:32:54.162922 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:54.162978 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.167057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:54.167127 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:54.205900 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.205920 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.205925 1225677 cri.go:89] found id: ""
	I1217 01:32:54.205932 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:54.205987 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.210350 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.214343 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:54.214419 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:54.246321 1225677 cri.go:89] found id: ""
	I1217 01:32:54.246348 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.246357 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:54.246364 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:54.246424 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:54.276281 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.276305 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.276310 1225677 cri.go:89] found id: ""
	I1217 01:32:54.276319 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:54.276379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.281009 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.285204 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:54.285281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:54.311149 1225677 cri.go:89] found id: ""
	I1217 01:32:54.311225 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.311251 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:54.311268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:54.311342 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:54.339737 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.339763 1225677 cri.go:89] found id: ""
	I1217 01:32:54.339771 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:54.339825 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:54.343615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:54.343749 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:54.370945 1225677 cri.go:89] found id: ""
	I1217 01:32:54.370971 1225677 logs.go:282] 0 containers: []
	W1217 01:32:54.370981 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:54.370991 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:54.371003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:54.390464 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:54.390495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:54.470328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:54.459697    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.460469    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462032    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.462570    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:54.464141    5632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:54.470363 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:54.470377 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:54.495970 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:54.495999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:54.557300 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:54.557336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:54.585791 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:54.585821 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:54.612126 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:54.612152 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:54.653218 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:54.653246 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:54.752385 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:54.752432 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:54.814139 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:54.814175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:54.885191 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:54.885226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:57.468539 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:32:57.479841 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:32:57.479913 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:32:57.511032 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.511058 1225677 cri.go:89] found id: ""
	I1217 01:32:57.511067 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:32:57.511130 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.515373 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:32:57.515446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:32:57.558508 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.558531 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:57.558537 1225677 cri.go:89] found id: ""
	I1217 01:32:57.558550 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:32:57.558622 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.563150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.567245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:32:57.567322 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:32:57.594294 1225677 cri.go:89] found id: ""
	I1217 01:32:57.594330 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.594341 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:32:57.594347 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:32:57.594411 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:32:57.626077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:32:57.626100 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.626106 1225677 cri.go:89] found id: ""
	I1217 01:32:57.626114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:32:57.626173 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.630289 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.634055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:32:57.634130 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:32:57.661683 1225677 cri.go:89] found id: ""
	I1217 01:32:57.661711 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.661721 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:32:57.661727 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:32:57.661785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:32:57.690521 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.690556 1225677 cri.go:89] found id: ""
	I1217 01:32:57.690565 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:32:57.690632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:32:57.694587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:32:57.694687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:32:57.721760 1225677 cri.go:89] found id: ""
	I1217 01:32:57.721783 1225677 logs.go:282] 0 containers: []
	W1217 01:32:57.721792 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:32:57.721801 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:32:57.721830 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:32:57.749279 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:32:57.749308 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:32:57.781988 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:32:57.782017 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:32:57.820059 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:32:57.820089 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:32:57.841084 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:32:57.841121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:32:57.884653 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:32:57.884752 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:32:57.932570 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:32:57.932605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:32:58.015607 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:32:58.015649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:32:58.116442 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:32:58.116479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:32:58.205896 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:32:58.190706    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.191690    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193452    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.193884    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:32:58.200882    5815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:32:58.205921 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:32:58.205934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:32:58.252524 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:32:58.252595 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.831933 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:00.843915 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:00.844011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:00.872994 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:00.873018 1225677 cri.go:89] found id: ""
	I1217 01:33:00.873027 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:00.873080 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.876819 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:00.876914 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:00.904306 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:00.904329 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:00.904334 1225677 cri.go:89] found id: ""
	I1217 01:33:00.904342 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:00.904397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.908029 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.911563 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:00.911642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:00.940652 1225677 cri.go:89] found id: ""
	I1217 01:33:00.940678 1225677 logs.go:282] 0 containers: []
	W1217 01:33:00.940687 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:00.940694 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:00.940752 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:00.967462 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:00.967503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:00.967514 1225677 cri.go:89] found id: ""
	I1217 01:33:00.967522 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:00.967601 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.971689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:00.976107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:00.976187 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:01.015150 1225677 cri.go:89] found id: ""
	I1217 01:33:01.015230 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.015253 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:01.015273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:01.015366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:01.044488 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.044553 1225677 cri.go:89] found id: ""
	I1217 01:33:01.044578 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:01.044671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:01.048372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:01.048523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:01.083014 1225677 cri.go:89] found id: ""
	I1217 01:33:01.083096 1225677 logs.go:282] 0 containers: []
	W1217 01:33:01.083121 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:01.083173 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:01.083208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:01.181547 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:01.181588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:01.202930 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:01.202966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:01.255543 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:01.255580 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:01.282899 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:01.282927 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:01.310357 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:01.310387 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:01.361428 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:01.361458 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:01.439491 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:01.431673    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.432494    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434011    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.434457    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:01.435940    5945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:01.439564 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:01.439594 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:01.466548 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:01.466575 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:01.524293 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:01.524332 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:01.603276 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:01.603314 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.194004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:04.206859 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:04.206931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:04.245597 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.245621 1225677 cri.go:89] found id: ""
	I1217 01:33:04.245630 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:04.245688 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.249418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:04.249489 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:04.278257 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.278277 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.278284 1225677 cri.go:89] found id: ""
	I1217 01:33:04.278291 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:04.278405 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.282613 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.286801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:04.286878 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:04.313756 1225677 cri.go:89] found id: ""
	I1217 01:33:04.313825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.313852 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:04.313866 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:04.313946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:04.343505 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.343528 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.343533 1225677 cri.go:89] found id: ""
	I1217 01:33:04.343542 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:04.343595 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.347432 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.351245 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:04.351318 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:04.378415 1225677 cri.go:89] found id: ""
	I1217 01:33:04.378443 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.378453 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:04.378461 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:04.378523 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:04.404603 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.404635 1225677 cri.go:89] found id: ""
	I1217 01:33:04.404645 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:04.404699 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:04.408372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:04.408490 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:04.435025 1225677 cri.go:89] found id: ""
	I1217 01:33:04.435053 1225677 logs.go:282] 0 containers: []
	W1217 01:33:04.435063 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:04.435072 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:04.435084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:04.453398 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:04.453431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:04.532185 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:04.520495    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.521003    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522551    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.522885    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:04.524536    6046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:04.532207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:04.532220 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:04.565093 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:04.565122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:04.608097 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:04.608141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:04.669592 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:04.669635 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:04.698199 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:04.698230 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:04.781891 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:04.781933 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:04.889443 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:04.889483 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:04.935503 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:04.935540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:04.962255 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:04.962288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.497519 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:07.509544 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:07.509619 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:07.541912 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.541930 1225677 cri.go:89] found id: ""
	I1217 01:33:07.541938 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:07.541998 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.545880 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:07.545967 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:07.576061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.576085 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:07.576090 1225677 cri.go:89] found id: ""
	I1217 01:33:07.576098 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:07.576156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.580118 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.584118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:07.584216 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:07.613260 1225677 cri.go:89] found id: ""
	I1217 01:33:07.613288 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.613297 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:07.613304 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:07.613390 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:07.643089 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:07.643113 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:07.643118 1225677 cri.go:89] found id: ""
	I1217 01:33:07.643126 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:07.643181 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.646892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.650360 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:07.650433 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:07.677367 1225677 cri.go:89] found id: ""
	I1217 01:33:07.677393 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.677403 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:07.677409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:07.677515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:07.705475 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.705499 1225677 cri.go:89] found id: ""
	I1217 01:33:07.705508 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:07.705588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:07.709429 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:07.709538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:07.737814 1225677 cri.go:89] found id: ""
	I1217 01:33:07.737838 1225677 logs.go:282] 0 containers: []
	W1217 01:33:07.737846 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:07.737855 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:07.737867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:07.767138 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:07.767166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:07.800084 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:07.800165 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:07.820093 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:07.820124 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:07.887706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:07.879535    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.880148    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.881948    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.882544    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:07.883709    6203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:07.887729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:07.887744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:07.915091 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:07.915122 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:07.956054 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:07.956116 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:08.019066 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:08.019105 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:08.080377 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:08.080423 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:08.124710 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:08.124793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:08.214495 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:08.214593 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:10.827104 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:10.838284 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:10.838422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:10.874165 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:10.874184 1225677 cri.go:89] found id: ""
	I1217 01:33:10.874192 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:10.874245 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.878108 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:10.878180 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:10.903766 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:10.903789 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:10.903794 1225677 cri.go:89] found id: ""
	I1217 01:33:10.903802 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:10.903857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.907574 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.911142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:10.911214 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:10.938246 1225677 cri.go:89] found id: ""
	I1217 01:33:10.938273 1225677 logs.go:282] 0 containers: []
	W1217 01:33:10.938283 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:10.938289 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:10.938347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:10.964843 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:10.964866 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:10.964871 1225677 cri.go:89] found id: ""
	I1217 01:33:10.964879 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:10.964935 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.968730 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:10.972392 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:10.972503 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:11.008562 1225677 cri.go:89] found id: ""
	I1217 01:33:11.008590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.008600 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:11.008607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:11.008716 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:11.041307 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.041342 1225677 cri.go:89] found id: ""
	I1217 01:33:11.041352 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:11.041408 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:11.045319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:11.045394 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:11.072727 1225677 cri.go:89] found id: ""
	I1217 01:33:11.072757 1225677 logs.go:282] 0 containers: []
	W1217 01:33:11.072771 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:11.072781 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:11.072793 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:11.092411 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:11.092531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:11.173959 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:11.164849    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.165894    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.167687    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.168261    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:11.169629    6323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:11.173986 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:11.174000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:11.204098 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:11.204130 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:11.265126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:11.265169 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:11.329309 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:11.329350 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:11.366487 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:11.366516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:11.449439 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:11.449474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:11.493614 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:11.493648 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:11.530111 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:11.530142 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:11.573692 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:11.573724 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.175120 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:14.187102 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:14.187212 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:14.217900 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.217923 1225677 cri.go:89] found id: ""
	I1217 01:33:14.217933 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:14.217993 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.228556 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:14.228632 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:14.256615 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.256694 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.256731 1225677 cri.go:89] found id: ""
	I1217 01:33:14.256747 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:14.256855 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.260873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.264886 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:14.264982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:14.293944 1225677 cri.go:89] found id: ""
	I1217 01:33:14.294012 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.294036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:14.294057 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:14.294149 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:14.322566 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.322586 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.322591 1225677 cri.go:89] found id: ""
	I1217 01:33:14.322599 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:14.322693 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.326575 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.330162 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:14.330237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:14.356466 1225677 cri.go:89] found id: ""
	I1217 01:33:14.356491 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.356500 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:14.356506 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:14.356566 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:14.386031 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:14.386055 1225677 cri.go:89] found id: ""
	I1217 01:33:14.386064 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:14.386142 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:14.390030 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:14.390110 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:14.416257 1225677 cri.go:89] found id: ""
	I1217 01:33:14.416284 1225677 logs.go:282] 0 containers: []
	W1217 01:33:14.416293 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:14.416303 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:14.416317 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:14.511192 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:14.511232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:14.604109 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:14.595658    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.596542    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598051    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.598603    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:14.600207    6457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:14.604132 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:14.604148 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:14.656861 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:14.656895 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:14.685614 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:14.685642 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:14.764169 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:14.764208 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:14.812699 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:14.812730 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:14.831513 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:14.831547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:14.858309 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:14.858339 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:14.909041 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:14.909072 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:14.975681 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:14.975723 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.515279 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:17.540730 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:17.540806 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:17.570081 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:17.570102 1225677 cri.go:89] found id: ""
	I1217 01:33:17.570110 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:17.570178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.574399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:17.574471 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:17.599589 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:17.599610 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:17.599614 1225677 cri.go:89] found id: ""
	I1217 01:33:17.599622 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:17.599689 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.604570 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.608574 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:17.608645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:17.635229 1225677 cri.go:89] found id: ""
	I1217 01:33:17.635306 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.635329 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:17.635350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:17.635422 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:17.668964 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:17.669003 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.669009 1225677 cri.go:89] found id: ""
	I1217 01:33:17.669017 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:17.669103 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.673057 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.677753 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:17.677826 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:17.707206 1225677 cri.go:89] found id: ""
	I1217 01:33:17.707245 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.707255 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:17.707261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:17.707325 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:17.740289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:17.740313 1225677 cri.go:89] found id: ""
	I1217 01:33:17.740322 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:17.740385 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:17.744409 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:17.744515 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:17.771770 1225677 cri.go:89] found id: ""
	I1217 01:33:17.771797 1225677 logs.go:282] 0 containers: []
	W1217 01:33:17.771806 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:17.771815 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:17.771828 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:17.800155 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:17.800190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:17.882443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:17.882481 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:17.935750 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:17.935781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:17.954392 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:17.954425 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:18.031535 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:18.022039    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.022705    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.024649    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.025307    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:18.027076    6616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:18.031568 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:18.031585 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:18.079987 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:18.080029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:18.108390 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:18.108454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:18.206148 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:18.206190 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:18.238865 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:18.238894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:18.280200 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:18.280236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.844541 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:20.855183 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:20.855255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:20.883645 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:20.883666 1225677 cri.go:89] found id: ""
	I1217 01:33:20.883673 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:20.883731 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.888021 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:20.888094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:20.917299 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:20.917325 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:20.917330 1225677 cri.go:89] found id: ""
	I1217 01:33:20.917338 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:20.917397 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.921256 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.925997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:20.926069 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:20.952872 1225677 cri.go:89] found id: ""
	I1217 01:33:20.952898 1225677 logs.go:282] 0 containers: []
	W1217 01:33:20.952907 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:20.952913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:20.952970 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:20.979961 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:20.979983 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:20.979989 1225677 cri.go:89] found id: ""
	I1217 01:33:20.979998 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:20.980064 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.984302 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:20.989098 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:20.989171 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:21.023299 1225677 cri.go:89] found id: ""
	I1217 01:33:21.023365 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.023382 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:21.023390 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:21.023454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:21.052742 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.052763 1225677 cri.go:89] found id: ""
	I1217 01:33:21.052773 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:21.052830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:21.056774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:21.056847 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:21.086360 1225677 cri.go:89] found id: ""
	I1217 01:33:21.086382 1225677 logs.go:282] 0 containers: []
	W1217 01:33:21.086391 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:21.086399 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:21.086411 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:21.114471 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:21.114500 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:21.213416 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:21.213451 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:21.294188 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:21.283141    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286161    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.286862    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.288551    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:21.289222    6736 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:21.294212 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:21.294253 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:21.321989 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:21.322022 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:21.361898 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:21.361940 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:21.415113 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:21.415151 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:21.443169 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:21.443202 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:21.538356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:21.538403 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:21.584226 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:21.584255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:21.602588 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:21.602625 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.196991 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:24.207442 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:24.207518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:24.243683 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.243708 1225677 cri.go:89] found id: ""
	I1217 01:33:24.243717 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:24.243772 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.247370 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:24.247444 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:24.274124 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.274153 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.274159 1225677 cri.go:89] found id: ""
	I1217 01:33:24.274167 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:24.274224 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.277936 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.281546 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:24.281628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:24.310864 1225677 cri.go:89] found id: ""
	I1217 01:33:24.310893 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.310903 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:24.310910 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:24.310968 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:24.342620 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.342643 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.342648 1225677 cri.go:89] found id: ""
	I1217 01:33:24.342656 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:24.342714 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.346873 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.350690 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:24.350776 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:24.378447 1225677 cri.go:89] found id: ""
	I1217 01:33:24.378476 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.378486 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:24.378510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:24.378592 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:24.410097 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.410122 1225677 cri.go:89] found id: ""
	I1217 01:33:24.410132 1225677 logs.go:282] 1 containers: [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:24.410193 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:24.414020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:24.414094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:24.440741 1225677 cri.go:89] found id: ""
	I1217 01:33:24.440825 1225677 logs.go:282] 0 containers: []
	W1217 01:33:24.440851 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:24.440879 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:24.440912 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:24.460132 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:24.460163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:24.493812 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:24.493842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:24.536741 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:24.536777 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:24.597219 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:24.597260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:24.663765 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:24.663805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:24.703808 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:24.703840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:24.784250 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:24.784288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:24.883741 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:24.883779 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:24.962818 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:24.951334    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.951909    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.956972    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.957528    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:24.959095    6912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:24.962842 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:24.962856 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:24.994828 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:24.994858 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:27.546732 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:27.564740 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:27.564805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:27.608525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:27.608549 1225677 cri.go:89] found id: ""
	I1217 01:33:27.608558 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:27.608611 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.613062 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:27.613135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:27.659805 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:27.659827 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:27.659831 1225677 cri.go:89] found id: ""
	I1217 01:33:27.659838 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:27.659896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.664210 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.668351 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:27.668446 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:27.704696 1225677 cri.go:89] found id: ""
	I1217 01:33:27.704771 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.704794 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:27.704815 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:27.704898 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:27.738798 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:27.738821 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:27.738827 1225677 cri.go:89] found id: ""
	I1217 01:33:27.738834 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:27.738896 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.743026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.746985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:27.747059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:27.785087 1225677 cri.go:89] found id: ""
	I1217 01:33:27.785111 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.785119 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:27.785126 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:27.785192 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:27.818270 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:27.818289 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:27.818293 1225677 cri.go:89] found id: ""
	I1217 01:33:27.818300 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:27.818356 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.822652 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:27.826638 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:27.826695 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:27.865573 1225677 cri.go:89] found id: ""
	I1217 01:33:27.865604 1225677 logs.go:282] 0 containers: []
	W1217 01:33:27.865613 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:27.865623 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:27.865634 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:27.972193 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:27.972232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:28.056562 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:28.046843    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.047661    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.049456    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051355    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:28.051706    7014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:28.056589 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:28.056605 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:28.085398 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:28.085429 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:28.132214 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:28.132252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:28.174271 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:28.174303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:28.273045 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:28.273082 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:28.321799 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:28.321880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:28.342146 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:28.342292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:28.406933 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:28.407120 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:28.498600 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:28.498680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:28.534124 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:28.534150 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.091052 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:31.103205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:31.103279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:31.140533 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.140556 1225677 cri.go:89] found id: ""
	I1217 01:33:31.140564 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:31.140627 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.145121 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:31.145202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:31.175735 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.175761 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.175768 1225677 cri.go:89] found id: ""
	I1217 01:33:31.175775 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:31.175832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.180026 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.184555 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:31.184628 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:31.213074 1225677 cri.go:89] found id: ""
	I1217 01:33:31.213100 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.213110 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:31.213117 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:31.213174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:31.251260 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.251286 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.251291 1225677 cri.go:89] found id: ""
	I1217 01:33:31.251299 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:31.251354 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.255625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.259649 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:31.259726 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:31.287030 1225677 cri.go:89] found id: ""
	I1217 01:33:31.287056 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.287065 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:31.287072 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:31.287128 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:31.314782 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.314851 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.314876 1225677 cri.go:89] found id: ""
	I1217 01:33:31.314902 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:31.314984 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.320071 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:31.324354 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:31.324534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:31.357412 1225677 cri.go:89] found id: ""
	I1217 01:33:31.357439 1225677 logs.go:282] 0 containers: []
	W1217 01:33:31.357449 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:31.357464 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:31.357480 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:31.462967 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:31.463006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:31.482965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:31.482995 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:31.552928 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:31.543575    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.544211    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.545927    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.546511    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:31.548148    7159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:31.552952 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:31.552966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:31.579435 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:31.579470 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:31.619907 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:31.619945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:31.687595 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:31.687636 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:31.720143 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:31.720175 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:31.746106 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:31.746135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:31.812096 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:31.812131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:31.841610 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:31.841646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:31.920159 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:31.920197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:34.457713 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:34.469492 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:34.469574 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:34.497755 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:34.497777 1225677 cri.go:89] found id: ""
	I1217 01:33:34.497786 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:34.497850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.501620 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:34.501703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:34.532206 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:34.532227 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:34.532231 1225677 cri.go:89] found id: ""
	I1217 01:33:34.532238 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:34.532299 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.537376 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.541069 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:34.541142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:34.577690 1225677 cri.go:89] found id: ""
	I1217 01:33:34.577730 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.577740 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:34.577763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:34.577844 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:34.606156 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.606176 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:34.606180 1225677 cri.go:89] found id: ""
	I1217 01:33:34.606188 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:34.606243 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.610716 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.614894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:34.614990 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:34.644563 1225677 cri.go:89] found id: ""
	I1217 01:33:34.644590 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.644599 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:34.644605 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:34.644685 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:34.673641 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:34.673666 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:34.673671 1225677 cri.go:89] found id: ""
	I1217 01:33:34.673679 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:34.673737 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.677531 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:34.681295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:34.681370 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:34.708990 1225677 cri.go:89] found id: ""
	I1217 01:33:34.709071 1225677 logs.go:282] 0 containers: []
	W1217 01:33:34.709088 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:34.709099 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:34.709111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:34.809701 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:34.809785 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:34.828178 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:34.828210 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:34.903131 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:34.894496    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.895057    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.896615    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.897300    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:34.899141    7313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:34.903155 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:34.903168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:34.971266 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:34.971304 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:35.004179 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:35.004215 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:35.041784 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:35.041815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:35.067541 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:35.067571 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:35.126841 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:35.126874 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:35.172191 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:35.172226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:35.200255 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:35.200295 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:35.239991 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:35.240030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:37.824762 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:37.835623 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:37.835693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:37.865989 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:37.866008 1225677 cri.go:89] found id: ""
	I1217 01:33:37.866018 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:37.866073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.869857 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:37.869946 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:37.898865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:37.898940 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:37.898960 1225677 cri.go:89] found id: ""
	I1217 01:33:37.898986 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:37.899093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.903232 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.907211 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:37.907281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:37.939280 1225677 cri.go:89] found id: ""
	I1217 01:33:37.939302 1225677 logs.go:282] 0 containers: []
	W1217 01:33:37.939311 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:37.939318 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:37.939379 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:37.967924 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:37.967945 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:37.967949 1225677 cri.go:89] found id: ""
	I1217 01:33:37.967957 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:37.968032 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.971797 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:37.975432 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:37.975510 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:38.007766 1225677 cri.go:89] found id: ""
	I1217 01:33:38.007790 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.007798 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:38.007805 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:38.007864 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:38.037473 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.037495 1225677 cri.go:89] found id: "688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.037503 1225677 cri.go:89] found id: ""
	I1217 01:33:38.037511 1225677 logs.go:282] 2 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee]
	I1217 01:33:38.037566 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.041569 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:38.045417 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:38.045524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:38.073829 1225677 cri.go:89] found id: ""
	I1217 01:33:38.073851 1225677 logs.go:282] 0 containers: []
	W1217 01:33:38.073860 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:38.073870 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:38.073882 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:38.093728 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:38.093764 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:38.176670 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:38.167933    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.168725    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.170569    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.171072    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:38.172702    7456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:38.176690 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:38.176703 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:38.211414 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:38.211443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:38.263725 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:38.263761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:38.309151 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:38.309186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:38.338107 1225677 logs.go:123] Gathering logs for kube-controller-manager [688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee] ...
	I1217 01:33:38.338143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 688aaafbe399493423a4d2a5a8a3ada5ed43a9ec11e10fbde5c43e17542f07ee"
	I1217 01:33:38.369538 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:38.369566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:38.449918 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:38.449954 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:38.542249 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:38.542288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:38.612539 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:38.612617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:38.642932 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:38.643015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
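	The block above is one full diagnostic cycle: minikube probes for a kube-apiserver process, lists each control-plane container with crictl, and then tails unit and container logs. A minimal sketch of the per-component container check, assuming crictl is available on the node exactly as in the Run: lines above; the component names are the ones the cycle queries:

	# List each expected control-plane container, as the cycle above does.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  echo "${name}: ${ids:-<none>}"
	done

	An empty result for coredns, kube-proxy, and kindnet matches the "No container was found" warnings logged in each cycle.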
	I1217 01:33:41.175028 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:41.186849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:41.186921 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:41.230880 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.230955 1225677 cri.go:89] found id: ""
	I1217 01:33:41.230992 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:41.231084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.235480 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:41.235641 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:41.266906 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.266980 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.267014 1225677 cri.go:89] found id: ""
	I1217 01:33:41.267040 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:41.267127 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.271136 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.275105 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:41.275225 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:41.306499 1225677 cri.go:89] found id: ""
	I1217 01:33:41.306580 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.306603 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:41.306624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:41.306737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:41.333549 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.333575 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.333580 1225677 cri.go:89] found id: ""
	I1217 01:33:41.333589 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:41.333643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.337497 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.341450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:41.341531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:41.368976 1225677 cri.go:89] found id: ""
	I1217 01:33:41.369004 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.369014 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:41.369020 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:41.369082 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:41.397520 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.397583 1225677 cri.go:89] found id: ""
	I1217 01:33:41.397607 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:41.397684 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:41.401528 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:41.401607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:41.427395 1225677 cri.go:89] found id: ""
	I1217 01:33:41.427423 1225677 logs.go:282] 0 containers: []
	W1217 01:33:41.427434 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:41.427444 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:41.427463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:41.525514 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:41.525559 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:41.551264 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:41.551299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:41.625083 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:41.614741    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.615252    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.618432    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.619462    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:41.620085    7607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:41.625123 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:41.625147 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:41.702454 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:41.702490 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:41.735107 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:41.735134 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:41.769228 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:41.769269 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:41.799696 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:41.799725 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:41.848171 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:41.848207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:41.933395 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:41.933446 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:42.025408 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:42.025452 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
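	Note that every cycle reports two ids for etcd and kube-scheduler but only one for kube-apiserver. A sketch for checking whether the extra ids are running or exited, assuming the same crictl binary the log invokes; the container id is a placeholder for one of the ids listed above, not a value from the log:

	sudo crictl ps -a --name=etcd              # full listing includes STATE and CREATED columns
	sudo crictl inspect <container-id> | grep -m1 '"state"'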
	I1217 01:33:44.562646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:44.573393 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:44.573486 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:44.600868 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:44.600895 1225677 cri.go:89] found id: ""
	I1217 01:33:44.600906 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:44.600983 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.604710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:44.604780 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:44.632082 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:44.632158 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:44.632187 1225677 cri.go:89] found id: ""
	I1217 01:33:44.632208 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:44.632294 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.636315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.640212 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:44.640285 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:44.669382 1225677 cri.go:89] found id: ""
	I1217 01:33:44.669404 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.669413 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:44.669419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:44.669480 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:44.699713 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:44.699732 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.699737 1225677 cri.go:89] found id: ""
	I1217 01:33:44.699747 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:44.699801 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.703608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.707118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:44.707191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:44.733881 1225677 cri.go:89] found id: ""
	I1217 01:33:44.733905 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.733914 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:44.733921 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:44.733983 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:44.761418 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:44.761440 1225677 cri.go:89] found id: ""
	I1217 01:33:44.761449 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:44.761507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:44.765368 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:44.765451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:44.797562 1225677 cri.go:89] found id: ""
	I1217 01:33:44.797587 1225677 logs.go:282] 0 containers: []
	W1217 01:33:44.797595 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:44.797605 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:44.797617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:44.824683 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:44.824716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:44.935133 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:44.935177 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:44.954652 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:44.954684 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:45.015678 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:45.015775 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:45.189553 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:45.191524 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:45.273264 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:45.273306 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:45.371974 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:45.372013 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:45.409119 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:45.409149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:45.483606 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:45.474665    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.475383    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477096    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.477693    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:45.479203    7791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:45.483631 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:45.483645 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:45.511796 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:45.511826 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.069605 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:48.081402 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:48.081501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:48.113467 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.113487 1225677 cri.go:89] found id: ""
	I1217 01:33:48.113496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:48.113554 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.123702 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:48.123830 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:48.152225 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.152299 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.152320 1225677 cri.go:89] found id: ""
	I1217 01:33:48.152346 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:48.152452 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.156596 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.160848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:48.160930 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:48.192903 1225677 cri.go:89] found id: ""
	I1217 01:33:48.192934 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.192944 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:48.192951 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:48.193016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:48.223459 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.223483 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.223489 1225677 cri.go:89] found id: ""
	I1217 01:33:48.223496 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:48.223577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.228708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.233033 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:48.233131 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:48.264313 1225677 cri.go:89] found id: ""
	I1217 01:33:48.264339 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.264348 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:48.264355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:48.264430 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:48.292891 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.292963 1225677 cri.go:89] found id: ""
	I1217 01:33:48.292986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:48.293068 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:48.297013 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:48.297089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:48.324697 1225677 cri.go:89] found id: ""
	I1217 01:33:48.324724 1225677 logs.go:282] 0 containers: []
	W1217 01:33:48.324734 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:48.324743 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:48.324755 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:48.343285 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:48.343318 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:48.401079 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:48.401121 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:48.445651 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:48.445685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:48.487906 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:48.487936 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:48.520261 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:48.520288 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:48.612095 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:48.612132 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:48.686505 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:48.676222    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.677564    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.678762    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.679388    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:48.681384    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:48.686528 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:48.686545 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:48.715518 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:48.715549 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:48.780723 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:48.780758 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:48.813883 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:48.813910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.424534 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:51.435019 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:51.435089 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:51.461515 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.461539 1225677 cri.go:89] found id: ""
	I1217 01:33:51.461549 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:51.461610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.465697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:51.465778 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:51.494232 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.494254 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:51.494260 1225677 cri.go:89] found id: ""
	I1217 01:33:51.494267 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:51.494342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.498178 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.501847 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:51.501920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:51.533242 1225677 cri.go:89] found id: ""
	I1217 01:33:51.533267 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.533277 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:51.533283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:51.533356 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:51.559915 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.559937 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:51.559942 1225677 cri.go:89] found id: ""
	I1217 01:33:51.559950 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:51.560017 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.563739 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.567426 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:51.567506 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:51.598933 1225677 cri.go:89] found id: ""
	I1217 01:33:51.598958 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.598978 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:51.598985 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:51.599043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:51.628013 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:51.628085 1225677 cri.go:89] found id: ""
	I1217 01:33:51.628107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:51.628195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:51.632081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:51.632153 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:51.664059 1225677 cri.go:89] found id: ""
	I1217 01:33:51.664095 1225677 logs.go:282] 0 containers: []
	W1217 01:33:51.664104 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:51.664114 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:51.664127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:51.703117 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:51.703141 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:51.746864 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:51.746901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:51.813259 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:51.813294 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:51.890408 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:51.890448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:51.996243 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:51.996281 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:52.078355 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:52.067125    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.068994    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.069537    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071164    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:52.071680    8045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:52.078385 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:52.078399 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:52.124157 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:52.124201 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:52.158325 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:52.158406 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:52.194882 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:52.194917 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:52.236180 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:52.236223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:54.755766 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:54.766584 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:54.766659 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:54.794813 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:54.794834 1225677 cri.go:89] found id: ""
	I1217 01:33:54.794844 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:54.794900 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.798697 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:54.798816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:54.830345 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:54.830368 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:54.830374 1225677 cri.go:89] found id: ""
	I1217 01:33:54.830381 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:54.830437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.834212 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.837869 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:54.837958 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:54.865687 1225677 cri.go:89] found id: ""
	I1217 01:33:54.865710 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.865720 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:54.865726 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:54.865784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:54.893199 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:54.893222 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:54.893228 1225677 cri.go:89] found id: ""
	I1217 01:33:54.893236 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:54.893300 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.897296 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.901035 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:54.901109 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:54.935123 1225677 cri.go:89] found id: ""
	I1217 01:33:54.935150 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.935160 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:54.935165 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:54.935227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:54.960828 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:54.960908 1225677 cri.go:89] found id: ""
	I1217 01:33:54.960925 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:54.960994 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:54.965788 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:54.965858 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:54.996816 1225677 cri.go:89] found id: ""
	I1217 01:33:54.996844 1225677 logs.go:282] 0 containers: []
	W1217 01:33:54.996854 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:54.996864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:54.996877 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:55.049187 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:55.049226 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:55.122184 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:55.122224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:55.149525 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:55.149555 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:55.259828 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:55.259866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:55.286876 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:55.286905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:55.332115 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:55.332149 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:55.359308 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:55.359340 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:55.444861 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:55.444901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:55.492994 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:55.493026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:55.512281 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:55.512312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:55.587576 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:55.578947    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.579657    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581380    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.581874    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:55.583678    8220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
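	Every cycle's describe-nodes attempt fails the same way: the apiserver container id is found, but nothing answers on localhost:8443. A minimal reproduction of that step, using the kubectl binary and kubeconfig paths copied verbatim from the log; the trailing curl probe is an assumption about confirming the refused port, not a command the harness runs:

	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	curl -sk --max-time 5 https://localhost:8443/version \
	  || echo "apiserver not reachable on :8443"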
	I1217 01:33:58.089262 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:33:58.101573 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:33:58.101658 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:33:58.137991 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:33:58.138015 1225677 cri.go:89] found id: ""
	I1217 01:33:58.138024 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:33:58.138084 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.142504 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:33:58.142579 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:33:58.172313 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.172337 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.172343 1225677 cri.go:89] found id: ""
	I1217 01:33:58.172350 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:33:58.172446 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.176396 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.180282 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:33:58.180366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:33:58.211138 1225677 cri.go:89] found id: ""
	I1217 01:33:58.211171 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.211181 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:33:58.211193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:33:58.211257 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:33:58.243736 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.243759 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.243764 1225677 cri.go:89] found id: ""
	I1217 01:33:58.243773 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:33:58.243830 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.247791 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.251576 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:33:58.251655 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:33:58.288139 1225677 cri.go:89] found id: ""
	I1217 01:33:58.288173 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.288184 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:33:58.288193 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:33:58.288255 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:33:58.317667 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.317690 1225677 cri.go:89] found id: ""
	I1217 01:33:58.317700 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:33:58.317763 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:33:58.321820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:33:58.321906 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:33:58.350850 1225677 cri.go:89] found id: ""
	I1217 01:33:58.350878 1225677 logs.go:282] 0 containers: []
	W1217 01:33:58.350888 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:33:58.350897 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:33:58.350910 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:33:58.416830 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:33:58.416867 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:33:58.444837 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:33:58.444868 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:33:58.528215 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:33:58.528263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:33:58.575846 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:33:58.575880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:33:58.595772 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:33:58.595807 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:33:58.650340 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:33:58.650375 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:33:58.701278 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:33:58.701316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:33:58.732779 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:33:58.732810 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:33:58.835274 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:33:58.835310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:33:58.910122 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:33:58.902118    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.902706    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904312    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.904847    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:33:58.906352    8346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:33:58.910207 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:33:58.910236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.438103 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:01.448838 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:01.448920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:01.479627 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:01.479651 1225677 cri.go:89] found id: ""
	I1217 01:34:01.479678 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:01.479736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.483564 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:01.483634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:01.510339 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.510364 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.510370 1225677 cri.go:89] found id: ""
	I1217 01:34:01.510378 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:01.510435 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.514437 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.519025 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:01.519139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:01.547434 1225677 cri.go:89] found id: ""
	I1217 01:34:01.547457 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.547466 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:01.547473 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:01.547530 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:01.574487 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.574508 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.574513 1225677 cri.go:89] found id: ""
	I1217 01:34:01.574520 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:01.574577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.578139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.581545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:01.581626 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:01.609342 1225677 cri.go:89] found id: ""
	I1217 01:34:01.609365 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.609374 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:01.609381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:01.609439 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:01.636506 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:01.636530 1225677 cri.go:89] found id: ""
	I1217 01:34:01.636540 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:01.636602 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:01.640274 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:01.640388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:01.669875 1225677 cri.go:89] found id: ""
	I1217 01:34:01.669944 1225677 logs.go:282] 0 containers: []
	W1217 01:34:01.669969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:01.669993 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:01.670033 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:01.710653 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:01.710691 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:01.763990 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:01.764028 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:01.833068 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:01.833107 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:01.863940 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:01.864023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:01.967213 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:01.967254 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:01.992938 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:01.992972 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:02.024381 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:02.024443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:02.106857 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:02.106896 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:02.143612 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:02.143646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:02.213706 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:02.205223    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.205798    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.207522    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.208190    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:02.209796    8483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:02.213729 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:02.213742 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.741826 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:04.752958 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:04.753026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:04.783743 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:04.783762 1225677 cri.go:89] found id: ""
	I1217 01:34:04.783770 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:04.784150 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.788287 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:04.788359 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:04.817040 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:04.817073 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:04.817079 1225677 cri.go:89] found id: ""
	I1217 01:34:04.817086 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:04.817147 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.821094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.825495 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:04.825571 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:04.853100 1225677 cri.go:89] found id: ""
	I1217 01:34:04.853124 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.853133 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:04.853140 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:04.853202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:04.881403 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:04.881425 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:04.881430 1225677 cri.go:89] found id: ""
	I1217 01:34:04.881438 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:04.881502 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.885516 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.889230 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:04.889353 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:04.915187 1225677 cri.go:89] found id: ""
	I1217 01:34:04.915219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.915229 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:04.915235 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:04.915296 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:04.946769 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:04.946802 1225677 cri.go:89] found id: ""
	I1217 01:34:04.946811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:04.946884 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:04.951231 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:04.951339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:04.978082 1225677 cri.go:89] found id: ""
	I1217 01:34:04.978110 1225677 logs.go:282] 0 containers: []
	W1217 01:34:04.978120 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:04.978128 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:04.978166 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:05.019076 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:05.019109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:05.101083 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:05.101161 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:05.177848 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:05.168695    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.169387    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.171282    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.172061    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:05.173040    8571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:05.177870 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:05.177884 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:05.204143 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:05.204172 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:05.268231 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:05.268268 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:05.297025 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:05.297054 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:05.327881 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:05.327911 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:05.437319 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:05.437360 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:05.456847 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:05.456883 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:05.498209 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:05.498242 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.077748 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:08.088818 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:08.088890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:08.126181 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.126213 1225677 cri.go:89] found id: ""
	I1217 01:34:08.126227 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:08.126292 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.131226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:08.131346 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:08.160808 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.160832 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.160837 1225677 cri.go:89] found id: ""
	I1217 01:34:08.160846 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:08.160923 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.166045 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.170405 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:08.170497 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:08.200928 1225677 cri.go:89] found id: ""
	I1217 01:34:08.200954 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.200964 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:08.200970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:08.201068 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:08.237681 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.237706 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.237711 1225677 cri.go:89] found id: ""
	I1217 01:34:08.237719 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:08.237794 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.241696 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.245486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:08.245561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:08.272543 1225677 cri.go:89] found id: ""
	I1217 01:34:08.272572 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.272582 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:08.272594 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:08.272676 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:08.304603 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.304627 1225677 cri.go:89] found id: ""
	I1217 01:34:08.304635 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:08.304690 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:08.308617 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:08.308691 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:08.338781 1225677 cri.go:89] found id: ""
	I1217 01:34:08.338809 1225677 logs.go:282] 0 containers: []
	W1217 01:34:08.338818 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:08.338827 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:08.338839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:08.374627 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:08.374660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:08.472485 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:08.472523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:08.490991 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:08.491026 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:08.574253 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:08.574292 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:08.602049 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:08.602118 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:08.681328 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:08.672923    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.673628    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675286    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.675933    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:08.677496    8729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:08.681348 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:08.681361 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:08.708974 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:08.709000 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:08.761284 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:08.761320 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:08.819965 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:08.820006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:08.850377 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:08.850405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:11.432699 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:11.444142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:11.444218 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:11.477380 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.477404 1225677 cri.go:89] found id: ""
	I1217 01:34:11.477414 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:11.477475 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.481941 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:11.482014 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:11.510503 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.510529 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.510546 1225677 cri.go:89] found id: ""
	I1217 01:34:11.510554 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:11.510650 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.514842 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.518923 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:11.519013 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:11.546962 1225677 cri.go:89] found id: ""
	I1217 01:34:11.546990 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.547000 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:11.547006 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:11.547080 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:11.574757 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:11.574782 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:11.574787 1225677 cri.go:89] found id: ""
	I1217 01:34:11.574796 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:11.574877 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.579088 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.583273 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:11.583402 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:11.613215 1225677 cri.go:89] found id: ""
	I1217 01:34:11.613244 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.613254 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:11.613261 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:11.613326 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:11.642127 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:11.642166 1225677 cri.go:89] found id: ""
	I1217 01:34:11.642175 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:11.642249 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:11.646180 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:11.646281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:11.676821 1225677 cri.go:89] found id: ""
	I1217 01:34:11.676848 1225677 logs.go:282] 0 containers: []
	W1217 01:34:11.676858 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:11.676868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:11.676880 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:11.776881 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:11.776922 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:11.797665 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:11.797700 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:11.873871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:11.865262    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.866191    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.867801    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.868371    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:11.869967    8840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:11.873895 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:11.873909 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:11.901431 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:11.901461 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:11.946983 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:11.947021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:11.993263 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:11.993299 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:12.069104 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:12.069143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:12.101484 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:12.101511 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:12.137373 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:12.137404 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:12.219779 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:12.219833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:14.749747 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:14.760900 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:14.760971 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:14.789422 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:14.789504 1225677 cri.go:89] found id: ""
	I1217 01:34:14.789520 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:14.789579 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.794016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:14.794094 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:14.820779 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:14.820802 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:14.820808 1225677 cri.go:89] found id: ""
	I1217 01:34:14.820815 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:14.820892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.824759 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.828502 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:14.828620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:14.855015 1225677 cri.go:89] found id: ""
	I1217 01:34:14.855042 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.855051 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:14.855058 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:14.855118 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:14.882554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:14.882580 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:14.882586 1225677 cri.go:89] found id: ""
	I1217 01:34:14.882594 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:14.882649 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.886723 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.890383 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:14.890487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:14.921014 1225677 cri.go:89] found id: ""
	I1217 01:34:14.921051 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.921077 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:14.921096 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:14.921186 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:14.950121 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:14.950151 1225677 cri.go:89] found id: ""
	I1217 01:34:14.950160 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:14.950235 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:14.954391 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:14.954491 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:14.981305 1225677 cri.go:89] found id: ""
	I1217 01:34:14.981381 1225677 logs.go:282] 0 containers: []
	W1217 01:34:14.981396 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:14.981406 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:14.981424 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:15.082515 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:15.082601 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:15.115676 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:15.115766 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:15.207150 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:15.207196 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:15.253067 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:15.253103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:15.282406 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:15.282434 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:15.332186 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:15.332232 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:15.383617 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:15.383653 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:15.413724 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:15.413761 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:15.512500 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:15.512539 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:15.531712 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:15.531744 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:15.607024 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:15.598847    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.599280    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.600984    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.601615    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:15.603227    9033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
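	Every "describe nodes" pass in this stretch fails the same way: nothing is answering on localhost:8443, so kubectl's API discovery gets connection refused and the runner simply retries on the next cycle. As a rough, hypothetical sketch (not minikube code; the endpoint, retry count, and interval are assumptions), the same readiness condition the log keeps re-checking could be probed from the node like this:

    #!/usr/bin/env bash
    # Hypothetical sketch: poll the local apiserver port the errors above reference
    # and report when it starts answering. Assumes curl is available on the node.
    for i in $(seq 1 30); do
      if curl -ksf https://localhost:8443/healthz >/dev/null 2>&1; then
        echo "apiserver is answering on localhost:8443"
        exit 0
      fi
      echo "attempt ${i}: connection refused or not ready; retrying in 5s"
      sleep 5
    done
    echo "apiserver never came up on localhost:8443" >&2
    exit 1

	In this run the condition never becomes true within the test window, which is why the identical stderr block recurs in the cycles below.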
	I1217 01:34:18.107382 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:18.125209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:18.125300 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:18.154715 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.154743 1225677 cri.go:89] found id: ""
	I1217 01:34:18.154759 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:18.154827 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.158989 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:18.159058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:18.186887 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.186906 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.186910 1225677 cri.go:89] found id: ""
	I1217 01:34:18.186918 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:18.186974 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.191114 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.195016 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:18.195088 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:18.230496 1225677 cri.go:89] found id: ""
	I1217 01:34:18.230522 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.230532 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:18.230541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:18.230603 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:18.257433 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.257453 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.257458 1225677 cri.go:89] found id: ""
	I1217 01:34:18.257466 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:18.257522 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.261223 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.264998 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:18.265077 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:18.298281 1225677 cri.go:89] found id: ""
	I1217 01:34:18.298359 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.298373 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:18.298381 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:18.298438 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:18.326008 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:18.326029 1225677 cri.go:89] found id: ""
	I1217 01:34:18.326038 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:18.326094 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:18.329952 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:18.330026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:18.355880 1225677 cri.go:89] found id: ""
	I1217 01:34:18.355914 1225677 logs.go:282] 0 containers: []
	W1217 01:34:18.355924 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:18.355956 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:18.355971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:18.430677 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:18.430716 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:18.461146 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:18.461178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:18.483944 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:18.483976 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:18.558884 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:18.550645    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.551149    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.552949    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.553296    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:18.554728    9120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:18.558914 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:18.558930 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:18.631593 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:18.631631 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:18.661399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:18.661431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:18.765933 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:18.765971 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:18.798005 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:18.798035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:18.838207 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:18.838245 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:18.879939 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:18.879973 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
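	The gathering pattern repeated in each cycle above is: resolve the crictl binary, list container IDs for one component with "crictl ps -a --quiet --name=<component>", then tail each ID with "crictl logs --tail 400". A minimal, hypothetical shell equivalent (the component argument is a placeholder; this is not minikube's own code):

    #!/usr/bin/env bash
    # Hypothetical sketch of the per-component gathering seen in this log:
    # list matching container IDs, then tail each container's logs.
    component="${1:-kube-apiserver}"
    ids="$(sudo crictl ps -a --quiet --name="${component}")"
    if [ -z "${ids}" ]; then
      echo "No container was found matching \"${component}\"" >&2
      exit 0
    fi
    for id in ${ids}; do
      echo "== ${component} container ${id} =="
      sudo crictl logs --tail 400 "${id}"
    done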
	I1217 01:34:21.409362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:21.420285 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:21.420355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:21.450399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:21.450424 1225677 cri.go:89] found id: ""
	I1217 01:34:21.450433 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:21.450488 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.454541 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:21.454613 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:21.484061 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.484086 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:21.484091 1225677 cri.go:89] found id: ""
	I1217 01:34:21.484099 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:21.484156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.488024 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.491648 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:21.491718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:21.522026 1225677 cri.go:89] found id: ""
	I1217 01:34:21.522052 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.522062 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:21.522071 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:21.522139 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:21.554855 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.554887 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:21.554894 1225677 cri.go:89] found id: ""
	I1217 01:34:21.554902 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:21.554955 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.558520 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.562302 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:21.562407 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:21.590541 1225677 cri.go:89] found id: ""
	I1217 01:34:21.590564 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.590574 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:21.590580 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:21.590636 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:21.626269 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:21.626340 1225677 cri.go:89] found id: ""
	I1217 01:34:21.626366 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:21.626428 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:21.630350 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:21.630464 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:21.666471 1225677 cri.go:89] found id: ""
	I1217 01:34:21.666498 1225677 logs.go:282] 0 containers: []
	W1217 01:34:21.666507 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:21.666516 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:21.666533 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:21.706780 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:21.706815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:21.774693 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:21.774729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:21.861669 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:21.861713 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:21.977061 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:21.977096 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:22.003122 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:22.003171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:22.051916 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:22.051957 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:22.082713 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:22.082746 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:22.116010 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:22.116037 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:22.146809 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:22.146848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:22.228639 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:22.221133    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.221572    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.222793    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.223181    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:22.224808    9298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:22.228703 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:22.228732 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.754744 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:24.765436 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:24.765518 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:24.794628 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:24.794658 1225677 cri.go:89] found id: ""
	I1217 01:34:24.794667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:24.794732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.798378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:24.798454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:24.832756 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:24.832781 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:24.832787 1225677 cri.go:89] found id: ""
	I1217 01:34:24.832794 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:24.832850 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.836854 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.840412 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:24.840572 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:24.868168 1225677 cri.go:89] found id: ""
	I1217 01:34:24.868247 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.868270 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:24.868290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:24.868381 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:24.899805 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:24.899825 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:24.899830 1225677 cri.go:89] found id: ""
	I1217 01:34:24.899838 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:24.899893 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.903464 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.906950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:24.907067 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:24.935718 1225677 cri.go:89] found id: ""
	I1217 01:34:24.935744 1225677 logs.go:282] 0 containers: []
	W1217 01:34:24.935753 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:24.935760 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:24.935818 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:24.967779 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:24.967802 1225677 cri.go:89] found id: ""
	I1217 01:34:24.967811 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:24.967863 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:24.971468 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:24.971534 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:25.001724 1225677 cri.go:89] found id: ""
	I1217 01:34:25.001815 1225677 logs.go:282] 0 containers: []
	W1217 01:34:25.001842 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:25.001890 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:25.001925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:25.023512 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:25.023709 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:25.051815 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:25.051848 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:25.099451 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:25.099487 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:25.141801 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:25.141832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:25.178412 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:25.178444 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:25.285631 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:25.285667 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:25.362578 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:25.354308    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.354888    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356642    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.356986    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:25.358625    9416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:25.362602 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:25.362617 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:25.403014 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:25.403050 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:25.510336 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:25.510395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:25.543551 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:25.543582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.129531 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:28.140763 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:28.140832 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:28.184591 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.184616 1225677 cri.go:89] found id: ""
	I1217 01:34:28.184624 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:28.184707 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.188557 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:28.188634 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:28.222629 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.222651 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.222656 1225677 cri.go:89] found id: ""
	I1217 01:34:28.222664 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:28.222724 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.226610 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.230481 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:28.230575 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:28.257099 1225677 cri.go:89] found id: ""
	I1217 01:34:28.257126 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.257135 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:28.257142 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:28.257220 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:28.291310 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:28.291347 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.291354 1225677 cri.go:89] found id: ""
	I1217 01:34:28.291388 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:28.291469 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.295342 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.298970 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:28.299075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:28.329122 1225677 cri.go:89] found id: ""
	I1217 01:34:28.329146 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.329155 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:28.329182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:28.329254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:28.359713 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.359736 1225677 cri.go:89] found id: ""
	I1217 01:34:28.359745 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:28.359803 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:28.363561 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:28.363633 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:28.397883 1225677 cri.go:89] found id: ""
	I1217 01:34:28.397910 1225677 logs.go:282] 0 containers: []
	W1217 01:34:28.397920 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:28.397929 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:28.397941 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:28.431945 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:28.431974 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:28.482268 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:28.482300 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:28.509035 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:28.509067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:28.557586 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:28.557623 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:28.616155 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:28.616203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:28.647557 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:28.647590 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:28.723102 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:28.723139 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:28.830255 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:28.830293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:28.849322 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:28.849355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:28.919883 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:28.911575    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.912396    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914090    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.914441    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:28.915699    9571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:28.919905 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:28.919926 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.492801 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:31.504000 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:31.504075 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:31.539143 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.539163 1225677 cri.go:89] found id: ""
	I1217 01:34:31.539173 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:31.539228 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.543277 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:31.543355 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:31.573251 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:31.573271 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:31.573275 1225677 cri.go:89] found id: ""
	I1217 01:34:31.573284 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:31.573337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.577458 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.581377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:31.581451 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:31.612241 1225677 cri.go:89] found id: ""
	I1217 01:34:31.612270 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.612280 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:31.612286 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:31.612345 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:31.643539 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:31.643563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.643569 1225677 cri.go:89] found id: ""
	I1217 01:34:31.643578 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:31.643638 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.647841 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.651771 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:31.651855 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:31.685384 1225677 cri.go:89] found id: ""
	I1217 01:34:31.685409 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.685418 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:31.685425 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:31.685487 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:31.713458 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.713491 1225677 cri.go:89] found id: ""
	I1217 01:34:31.713501 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:31.713571 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:31.717510 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:31.717598 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:31.742954 1225677 cri.go:89] found id: ""
	I1217 01:34:31.742979 1225677 logs.go:282] 0 containers: []
	W1217 01:34:31.742989 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:31.742998 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:31.743030 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:31.826689 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:31.818371    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.818951    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.820702    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.821364    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:31.822993    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:31.826712 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:31.826726 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:31.858359 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:31.858389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:31.890466 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:31.890494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:31.920394 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:31.920516 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:31.954114 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:31.954143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:32.048397 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:32.048463 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:32.068978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:32.069014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:32.126891 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:32.126931 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:32.194493 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:32.194531 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:32.278811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:32.278854 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
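	Alongside the per-container logs, each cycle also collects host-level sources: the kubelet and CRI-O systemd journals and the kernel ring buffer. A hypothetical condensed equivalent of those three commands (same units and tail length as shown in the log; the dmesg flags here are simplified):

    #!/usr/bin/env bash
    # Hypothetical sketch of the host-level collection in this log:
    # systemd journals for kubelet and CRI-O, plus recent kernel warnings/errors.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400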
	I1217 01:34:34.866004 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:34.876932 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:34.877040 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:34.904525 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:34.904548 1225677 cri.go:89] found id: ""
	I1217 01:34:34.904556 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:34.904634 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.908290 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:34.908388 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:34.937927 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:34.937962 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:34.937967 1225677 cri.go:89] found id: ""
	I1217 01:34:34.937975 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:34.938053 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.941844 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:34.945447 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:34.945529 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:34.974834 1225677 cri.go:89] found id: ""
	I1217 01:34:34.974860 1225677 logs.go:282] 0 containers: []
	W1217 01:34:34.974870 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:34.974876 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:34.974932 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:35.015100 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.015121 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.015126 1225677 cri.go:89] found id: ""
	I1217 01:34:35.015134 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:35.015196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.019378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.023124 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:35.023202 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:35.055461 1225677 cri.go:89] found id: ""
	I1217 01:34:35.055488 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.055497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:35.055503 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:35.055561 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:35.083009 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.083083 1225677 cri.go:89] found id: ""
	I1217 01:34:35.083107 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:35.083195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:35.087719 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:35.087788 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:35.115588 1225677 cri.go:89] found id: ""
	I1217 01:34:35.115615 1225677 logs.go:282] 0 containers: []
	W1217 01:34:35.115625 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:35.115649 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:35.115664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:35.165942 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:35.165978 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:35.194775 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:35.194803 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:35.291776 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:35.291811 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:35.338079 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:35.338110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:35.357793 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:35.357824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:35.428871 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:35.420822    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.421585    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423304    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.423620    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:35.425092    9828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:35.428893 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:35.428905 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:35.499513 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:35.499548 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:35.540136 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:35.540211 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:35.636873 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:35.636913 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:35.665818 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:35.665889 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.220553 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:38.231749 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:38.231823 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:38.259479 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.259500 1225677 cri.go:89] found id: ""
	I1217 01:34:38.259509 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:38.259568 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.263241 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:38.263385 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:38.295256 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.295292 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.295301 1225677 cri.go:89] found id: ""
	I1217 01:34:38.295310 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:38.295378 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.300468 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.305174 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:38.305294 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:38.339161 1225677 cri.go:89] found id: ""
	I1217 01:34:38.339194 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.339204 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:38.339210 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:38.339275 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:38.367494 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.367518 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:38.367524 1225677 cri.go:89] found id: ""
	I1217 01:34:38.367531 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:38.367608 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.371441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.375084 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:38.375191 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:38.401755 1225677 cri.go:89] found id: ""
	I1217 01:34:38.401784 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.401795 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:38.401801 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:38.401890 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:38.429928 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.429962 1225677 cri.go:89] found id: ""
	I1217 01:34:38.429971 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:38.430044 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:38.433894 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:38.433965 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:38.461088 1225677 cri.go:89] found id: ""
	I1217 01:34:38.461114 1225677 logs.go:282] 0 containers: []
	W1217 01:34:38.461124 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:38.461133 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:38.461144 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:38.544237 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:38.544274 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:38.574281 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:38.574312 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:38.620093 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:38.620131 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:38.674826 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:38.674902 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:38.752562 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:38.752603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:38.781494 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:38.781527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:38.833674 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:38.833706 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:38.933793 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:38.933832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:38.953733 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:38.953782 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:39.029298 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:39.021475    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.022070    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.023581    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.024100    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:39.025579    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:39.029322 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:39.029336 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.557003 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:41.568311 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:41.568412 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:41.601070 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:41.601089 1225677 cri.go:89] found id: ""
	I1217 01:34:41.601097 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:41.601156 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.605150 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:41.605227 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:41.633863 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:41.633887 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:41.633893 1225677 cri.go:89] found id: ""
	I1217 01:34:41.633901 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:41.633958 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.638555 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.644087 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:41.644168 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:41.684237 1225677 cri.go:89] found id: ""
	I1217 01:34:41.684276 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.684287 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:41.684294 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:41.684371 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:41.717925 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:41.717993 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:41.718016 1225677 cri.go:89] found id: ""
	I1217 01:34:41.718032 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:41.718109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.722478 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.726529 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:41.726607 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:41.754525 1225677 cri.go:89] found id: ""
	I1217 01:34:41.754552 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.754562 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:41.754571 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:41.754673 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:41.784794 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:41.784860 1225677 cri.go:89] found id: ""
	I1217 01:34:41.784883 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:41.784969 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:41.788882 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:41.788980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:41.825117 1225677 cri.go:89] found id: ""
	I1217 01:34:41.825193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:41.825216 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:41.825233 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:41.825259 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:41.934154 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:41.934191 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:41.955231 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:41.955263 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:42.023779 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:42.023819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:42.054183 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:42.054218 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:42.146898 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:42.147005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:42.249519 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:42.239173   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.240228   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.241030   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.243116   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:42.244018   10095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:42.249543 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:42.249557 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:42.280803 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:42.280833 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:42.327682 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:42.327731 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:42.373795 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:42.373832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:42.415409 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:42.415437 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:44.951197 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:44.962939 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:44.963016 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:44.996268 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:44.996297 1225677 cri.go:89] found id: ""
	I1217 01:34:44.996306 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:44.996365 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.016281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:45.016367 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:45.152354 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.152375 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.152380 1225677 cri.go:89] found id: ""
	I1217 01:34:45.152389 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:45.152473 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.161519 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.169793 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:45.169869 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:45.269649 1225677 cri.go:89] found id: ""
	I1217 01:34:45.269685 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.269696 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:45.269715 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:45.269816 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:45.322137 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.322210 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:45.322250 1225677 cri.go:89] found id: ""
	I1217 01:34:45.322320 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:45.322406 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.327229 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.331531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:45.331703 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:45.362501 1225677 cri.go:89] found id: ""
	I1217 01:34:45.362571 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.362602 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:45.362624 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:45.362696 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:45.394160 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.394240 1225677 cri.go:89] found id: ""
	I1217 01:34:45.394258 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:45.394335 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:45.398315 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:45.398397 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:45.426737 1225677 cri.go:89] found id: ""
	I1217 01:34:45.426780 1225677 logs.go:282] 0 containers: []
	W1217 01:34:45.426790 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:45.426819 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:45.426839 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:45.503383 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:45.494373   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.495245   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497117   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.497476   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:45.499001   10207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:45.503464 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:45.503485 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:45.535637 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:45.535672 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:45.583362 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:45.583398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:45.613182 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:45.613214 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:45.695579 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:45.695626 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:45.729534 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:45.729563 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:45.826222 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:45.826262 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:45.846157 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:45.846195 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:45.911389 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:45.911426 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:45.983046 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:45.983084 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.519530 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:48.530493 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:48.530565 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:48.560366 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:48.560471 1225677 cri.go:89] found id: ""
	I1217 01:34:48.560496 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:48.560585 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.564848 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:48.564920 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:48.593560 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.593628 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:48.593666 1225677 cri.go:89] found id: ""
	I1217 01:34:48.593696 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:48.593783 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.597895 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.601634 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:48.601718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:48.631022 1225677 cri.go:89] found id: ""
	I1217 01:34:48.631048 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.631057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:48.631064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:48.631122 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:48.656804 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:48.656829 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:48.656834 1225677 cri.go:89] found id: ""
	I1217 01:34:48.656841 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:48.656898 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.660979 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.664698 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:48.664770 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:48.692344 1225677 cri.go:89] found id: ""
	I1217 01:34:48.692372 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.692383 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:48.692389 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:48.692481 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:48.721997 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:48.722020 1225677 cri.go:89] found id: ""
	I1217 01:34:48.722029 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:48.722111 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:48.726120 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:48.726247 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:48.753313 1225677 cri.go:89] found id: ""
	I1217 01:34:48.753339 1225677 logs.go:282] 0 containers: []
	W1217 01:34:48.753349 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:48.753358 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:48.753388 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:48.849435 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:48.849474 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:48.870486 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:48.870523 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:48.943874 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:48.935893   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.936611   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938107   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.938659   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:48.940182   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:48.943904 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:48.943919 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:48.991171 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:48.991205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:49.020622 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:49.020649 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:49.064904 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:49.064942 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:49.143148 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:49.143186 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:49.174999 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:49.175086 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:49.209127 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:49.209156 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:49.296275 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:49.296325 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:51.840412 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:51.851134 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:51.851204 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:51.880791 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:51.880811 1225677 cri.go:89] found id: ""
	I1217 01:34:51.880820 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:51.880879 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.884883 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:51.884962 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:51.911511 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:51.911535 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:51.911541 1225677 cri.go:89] found id: ""
	I1217 01:34:51.911549 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:51.911607 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.915352 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.918918 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:51.918986 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:51.950127 1225677 cri.go:89] found id: ""
	I1217 01:34:51.950152 1225677 logs.go:282] 0 containers: []
	W1217 01:34:51.950163 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:51.950169 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:51.950266 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:51.978696 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:51.978725 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:51.978731 1225677 cri.go:89] found id: ""
	I1217 01:34:51.978738 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:51.978795 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.982736 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:51.986411 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:51.986482 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:52.016886 1225677 cri.go:89] found id: ""
	I1217 01:34:52.016911 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.016920 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:52.016926 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:52.016989 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:52.045870 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.045895 1225677 cri.go:89] found id: ""
	I1217 01:34:52.045904 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:52.045962 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:52.049906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:52.049977 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:52.077565 1225677 cri.go:89] found id: ""
	I1217 01:34:52.077592 1225677 logs.go:282] 0 containers: []
	W1217 01:34:52.077604 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:52.077614 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:52.077646 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:52.105176 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:52.105205 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:52.211964 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:52.211999 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:52.252350 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:52.252382 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:52.306053 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:52.306088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:52.376262 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:52.376302 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:52.403480 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:52.403508 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:52.431952 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:52.431983 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:52.510953 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:52.510990 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:52.555450 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:52.555482 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:52.574086 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:52.574119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:52.644412 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:52.635327   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.636070   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.637737   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.638072   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:52.639953   10550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.144646 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:55.155615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:55.155693 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:55.184697 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.184716 1225677 cri.go:89] found id: ""
	I1217 01:34:55.184724 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:55.184781 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.188462 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:55.188538 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:55.217937 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.217961 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.217966 1225677 cri.go:89] found id: ""
	I1217 01:34:55.217974 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:55.218030 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.221924 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.226643 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:55.226714 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:55.254617 1225677 cri.go:89] found id: ""
	I1217 01:34:55.254645 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.254655 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:55.254662 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:55.254721 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:55.282393 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.282419 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.282424 1225677 cri.go:89] found id: ""
	I1217 01:34:55.282432 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:55.282485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.286357 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.289912 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:55.289992 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:55.316252 1225677 cri.go:89] found id: ""
	I1217 01:34:55.316278 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.316288 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:55.316295 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:55.316368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:55.343249 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.343314 1225677 cri.go:89] found id: ""
	I1217 01:34:55.343337 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:55.343433 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:55.347319 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:55.347448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:55.381545 1225677 cri.go:89] found id: ""
	I1217 01:34:55.381629 1225677 logs.go:282] 0 containers: []
	W1217 01:34:55.381645 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:55.381656 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:55.381669 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:55.421981 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:55.422014 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:55.453301 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:55.453342 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:55.480646 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:55.480687 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:55.570826 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:55.561906   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.562626   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.564518   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.565337   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:55.567151   10645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:55.570849 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:55.570863 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:55.599216 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:55.599257 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:55.658218 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:55.658310 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:55.745919 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:55.745955 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:55.838064 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:55.838101 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:55.888374 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:55.888405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:55.996293 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:55.996331 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:58.522397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:34:58.536202 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:34:58.536271 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:34:58.566870 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:58.566965 1225677 cri.go:89] found id: ""
	I1217 01:34:58.566994 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:34:58.567139 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.571283 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:34:58.571363 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:34:58.598180 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:34:58.598208 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.598213 1225677 cri.go:89] found id: ""
	I1217 01:34:58.598222 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:34:58.598297 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.602201 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.605913 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:34:58.605997 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:34:58.636167 1225677 cri.go:89] found id: ""
	I1217 01:34:58.636193 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.636202 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:34:58.636209 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:34:58.636270 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:34:58.662111 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:58.662135 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:58.662140 1225677 cri.go:89] found id: ""
	I1217 01:34:58.662148 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:34:58.662209 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.666315 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.670253 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:34:58.670348 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:34:58.696144 1225677 cri.go:89] found id: ""
	I1217 01:34:58.696219 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.696244 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:34:58.696265 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:34:58.696347 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:34:58.726742 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.726767 1225677 cri.go:89] found id: ""
	I1217 01:34:58.726776 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:34:58.726832 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:34:58.730710 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:34:58.730785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:34:58.759394 1225677 cri.go:89] found id: ""
	I1217 01:34:58.759421 1225677 logs.go:282] 0 containers: []
	W1217 01:34:58.759431 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:34:58.759440 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:34:58.759454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:34:58.817531 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:34:58.817569 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:34:58.847360 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:34:58.847389 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:34:58.929741 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:34:58.929776 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:34:58.968951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:34:58.968982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:34:59.043218 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:34:59.034606   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.035184   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.036933   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.037521   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:34:59.039351   10791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:34:59.043239 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:34:59.043255 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:34:59.070405 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:34:59.070431 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:34:59.146784 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:34:59.146829 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:34:59.179445 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:34:59.179479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:34:59.286441 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:34:59.286479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:34:59.308412 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:34:59.308540 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.850397 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:01.863234 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:01.863368 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:01.898442 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:01.898473 1225677 cri.go:89] found id: ""
	I1217 01:35:01.898484 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:01.898577 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.903064 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:01.903142 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:01.936524 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:01.936547 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:01.936551 1225677 cri.go:89] found id: ""
	I1217 01:35:01.936559 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:01.936625 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.942865 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:01.947963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:01.948071 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:01.979359 1225677 cri.go:89] found id: ""
	I1217 01:35:01.979384 1225677 logs.go:282] 0 containers: []
	W1217 01:35:01.979393 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:01.979399 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:01.979466 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:02.012882 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.012925 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.012931 1225677 cri.go:89] found id: ""
	I1217 01:35:02.012975 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:02.013055 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.017605 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.021797 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:02.021870 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:02.049550 1225677 cri.go:89] found id: ""
	I1217 01:35:02.049621 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.049638 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:02.049646 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:02.049722 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:02.081301 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.081326 1225677 cri.go:89] found id: ""
	I1217 01:35:02.081335 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:02.081392 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:02.086118 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:02.086210 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:02.125352 1225677 cri.go:89] found id: ""
	I1217 01:35:02.125374 1225677 logs.go:282] 0 containers: []
	W1217 01:35:02.125383 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:02.125393 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:02.125405 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:02.197255 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:02.188608   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.189649   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191304   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.191801   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:02.193297   10899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:02.197318 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:02.197355 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:02.226446 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:02.226488 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:02.271257 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:02.271293 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:02.314955 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:02.314988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:02.386430 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:02.386468 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:02.417607 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:02.417682 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:02.449011 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:02.449041 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:02.551859 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:02.551899 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:02.571928 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:02.571960 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:02.659356 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:02.659395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:05.190765 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:05.203695 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:05.203771 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:05.238686 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.238707 1225677 cri.go:89] found id: ""
	I1217 01:35:05.238716 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:05.238778 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.242613 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:05.242687 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:05.272627 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.272661 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.272667 1225677 cri.go:89] found id: ""
	I1217 01:35:05.272675 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:05.272757 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.277184 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.281337 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:05.281414 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:05.309340 1225677 cri.go:89] found id: ""
	I1217 01:35:05.309361 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.309370 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:05.309377 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:05.309437 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:05.342268 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.342294 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.342300 1225677 cri.go:89] found id: ""
	I1217 01:35:05.342308 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:05.342394 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.346668 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.350724 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:05.350805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:05.378257 1225677 cri.go:89] found id: ""
	I1217 01:35:05.378289 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.378298 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:05.378305 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:05.378366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:05.406348 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.406370 1225677 cri.go:89] found id: ""
	I1217 01:35:05.406379 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:05.406455 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:05.410653 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:05.410724 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:05.441777 1225677 cri.go:89] found id: ""
	I1217 01:35:05.441802 1225677 logs.go:282] 0 containers: []
	W1217 01:35:05.441812 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:05.441820 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:05.441832 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:05.521081 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:05.512031   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.513725   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.514332   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.515303   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:05.516036   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:05.521113 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:05.521127 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:05.559491 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:05.559525 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:05.608690 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:05.608727 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:05.640635 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:05.640666 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:05.720771 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:05.720808 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:05.824388 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:05.824427 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:05.864839 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:05.864871 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:05.960476 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:05.960520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:05.992555 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:05.992588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:06.045891 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:06.045925 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:08.568611 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:08.579598 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:08.579681 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:08.607399 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.607421 1225677 cri.go:89] found id: ""
	I1217 01:35:08.607430 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:08.607485 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.611906 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:08.611982 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:08.638447 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.638470 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:08.638476 1225677 cri.go:89] found id: ""
	I1217 01:35:08.638484 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:08.638558 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.642337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.646066 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:08.646162 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:08.673000 1225677 cri.go:89] found id: ""
	I1217 01:35:08.673026 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.673036 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:08.673042 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:08.673135 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:08.701768 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:08.701792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:08.701798 1225677 cri.go:89] found id: ""
	I1217 01:35:08.701806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:08.701892 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.705733 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.709545 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:08.709620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:08.736283 1225677 cri.go:89] found id: ""
	I1217 01:35:08.736309 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.736319 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:08.736325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:08.736383 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:08.763589 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:08.763610 1225677 cri.go:89] found id: ""
	I1217 01:35:08.763618 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:08.763679 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:08.768008 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:08.768157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:08.794921 1225677 cri.go:89] found id: ""
	I1217 01:35:08.794948 1225677 logs.go:282] 0 containers: []
	W1217 01:35:08.794957 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:08.794967 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:08.795003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:08.866335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:08.858583   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.859217   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.860643   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.861108   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:08.862542   11174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:08.866356 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:08.866371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:08.894862 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:08.894894 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:08.945712 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:08.945749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:09.030175 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:09.030213 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:09.057626 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:09.057656 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:09.140070 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:09.140109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:09.249646 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:09.249685 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:09.269874 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:09.269906 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:09.317090 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:09.317126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:09.346482 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:09.346513 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:11.877651 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:11.889575 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:11.889645 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:11.917211 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:11.917234 1225677 cri.go:89] found id: ""
	I1217 01:35:11.917243 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:11.917309 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.921144 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:11.921223 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:11.955516 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:11.955536 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:11.955541 1225677 cri.go:89] found id: ""
	I1217 01:35:11.955548 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:11.955604 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.959308 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:11.962862 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:11.962933 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:11.991261 1225677 cri.go:89] found id: ""
	I1217 01:35:11.991284 1225677 logs.go:282] 0 containers: []
	W1217 01:35:11.991293 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:11.991299 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:11.991366 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:12.023452 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.023477 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.023483 1225677 cri.go:89] found id: ""
	I1217 01:35:12.023491 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:12.023581 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.027715 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.031641 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:12.031751 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:12.059135 1225677 cri.go:89] found id: ""
	I1217 01:35:12.059211 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.059234 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:12.059255 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:12.059343 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:12.092809 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.092830 1225677 cri.go:89] found id: ""
	I1217 01:35:12.092839 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:12.092915 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:12.096814 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:12.096963 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:12.132911 1225677 cri.go:89] found id: ""
	I1217 01:35:12.132936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:12.132946 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:12.132955 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:12.132966 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:12.235310 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:12.235346 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:12.255554 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:12.255587 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:12.303522 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:12.303560 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:12.374998 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:12.375032 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:12.461333 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:12.461371 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:12.547450 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:12.534766   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.535390   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541133   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.541862   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:12.543644   11333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:12.547475 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:12.547489 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:12.574864 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:12.574892 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:12.619775 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:12.619816 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:12.649040 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:12.649123 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:12.677296 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:12.677326 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.212228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:15.225138 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:15.225215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:15.259192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.259218 1225677 cri.go:89] found id: ""
	I1217 01:35:15.259228 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:15.259287 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.263205 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:15.263279 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:15.290493 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.290516 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.290521 1225677 cri.go:89] found id: ""
	I1217 01:35:15.290529 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:15.290588 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.294490 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.298107 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:15.298208 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:15.325021 1225677 cri.go:89] found id: ""
	I1217 01:35:15.325047 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.325057 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:15.325063 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:15.325125 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:15.353712 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.353744 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.353750 1225677 cri.go:89] found id: ""
	I1217 01:35:15.353758 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:15.353828 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.357883 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.361729 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:15.361817 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:15.389342 1225677 cri.go:89] found id: ""
	I1217 01:35:15.389370 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.389379 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:15.389386 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:15.389449 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:15.418437 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.418470 1225677 cri.go:89] found id: ""
	I1217 01:35:15.418479 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:15.418553 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:15.422466 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:15.422548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:15.449297 1225677 cri.go:89] found id: ""
	I1217 01:35:15.449333 1225677 logs.go:282] 0 containers: []
	W1217 01:35:15.449343 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:15.449370 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:15.449394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:15.468355 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:15.468385 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:15.494969 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:15.495005 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:15.543170 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:15.543209 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:15.616803 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:15.608876   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.609532   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611122   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.611725   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:15.613279   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:15.616829 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:15.616845 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:15.659996 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:15.660031 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:15.730995 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:15.731034 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:15.758963 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:15.758994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:15.785562 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:15.785633 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:15.872457 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:15.872494 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:15.904808 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:15.904838 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:18.506161 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:18.518520 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:18.518589 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:18.550949 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.550972 1225677 cri.go:89] found id: ""
	I1217 01:35:18.550982 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:18.551041 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.554800 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:18.554880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:18.582497 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:18.582522 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.582527 1225677 cri.go:89] found id: ""
	I1217 01:35:18.582535 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:18.582594 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.586831 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.590486 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:18.590560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:18.617401 1225677 cri.go:89] found id: ""
	I1217 01:35:18.617426 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.617436 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:18.617443 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:18.617504 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:18.648400 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:18.648458 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.648464 1225677 cri.go:89] found id: ""
	I1217 01:35:18.648472 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:18.648530 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.652380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.655820 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:18.655916 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:18.689519 1225677 cri.go:89] found id: ""
	I1217 01:35:18.689544 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.689553 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:18.689560 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:18.689621 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:18.718284 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:18.718306 1225677 cri.go:89] found id: ""
	I1217 01:35:18.718313 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:18.718368 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:18.722268 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:18.722372 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:18.753514 1225677 cri.go:89] found id: ""
	I1217 01:35:18.753542 1225677 logs.go:282] 0 containers: []
	W1217 01:35:18.753558 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:18.753567 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:18.753611 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:18.771813 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:18.771842 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:18.845441 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:18.836863   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.837515   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839200   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.839769   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:18.841399   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:18.845463 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:18.845477 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:18.872553 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:18.872582 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:18.922099 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:18.922176 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:18.950258 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:18.950285 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:18.990211 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:18.990241 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:19.031127 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:19.031164 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:19.107071 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:19.107109 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:19.138299 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:19.138327 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:19.222624 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:19.222660 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:21.834640 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:21.845711 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:21.845784 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:21.895249 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:21.895280 1225677 cri.go:89] found id: ""
	I1217 01:35:21.895292 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:21.895371 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.902322 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:21.902404 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:21.943815 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:21.943857 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:21.943863 1225677 cri.go:89] found id: ""
	I1217 01:35:21.943877 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:21.943963 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.949206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:21.954547 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:21.954640 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:21.988594 1225677 cri.go:89] found id: ""
	I1217 01:35:21.988620 1225677 logs.go:282] 0 containers: []
	W1217 01:35:21.988630 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:21.988636 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:21.988718 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:22.024625 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.024646 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.024651 1225677 cri.go:89] found id: ""
	I1217 01:35:22.024660 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:22.024760 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.029143 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.033935 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:22.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:22.067922 1225677 cri.go:89] found id: ""
	I1217 01:35:22.067946 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.067955 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:22.067961 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:22.068020 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:22.097619 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.097641 1225677 cri.go:89] found id: ""
	I1217 01:35:22.097649 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:22.097706 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:22.101692 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:22.101766 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:22.136868 1225677 cri.go:89] found id: ""
	I1217 01:35:22.136891 1225677 logs.go:282] 0 containers: []
	W1217 01:35:22.136900 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:22.136911 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:22.136923 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:22.164209 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:22.164236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:22.208399 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:22.208512 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:22.256618 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:22.256650 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:22.287201 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:22.287237 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:22.314443 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:22.314472 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:22.346752 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:22.346780 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:22.445530 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:22.445567 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:22.464378 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:22.464409 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:22.554715 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:22.554749 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:22.659061 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:22.659103 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:22.731143 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:22.723383   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.723983   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725407   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.725898   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:22.727518   11787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.231455 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:25.242812 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:25.242949 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:25.280443 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.280470 1225677 cri.go:89] found id: ""
	I1217 01:35:25.280478 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:25.280536 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.284885 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:25.285008 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:25.313823 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.313846 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.313852 1225677 cri.go:89] found id: ""
	I1217 01:35:25.313859 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:25.313939 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.317952 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.321539 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:25.321620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:25.354565 1225677 cri.go:89] found id: ""
	I1217 01:35:25.354632 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.354656 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:25.354681 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:25.354777 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:25.386743 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.386774 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.386779 1225677 cri.go:89] found id: ""
	I1217 01:35:25.386787 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:25.386857 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.390671 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.394226 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:25.394339 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:25.421123 1225677 cri.go:89] found id: ""
	I1217 01:35:25.421212 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.421228 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:25.421236 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:25.421310 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:25.448879 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.448904 1225677 cri.go:89] found id: ""
	I1217 01:35:25.448913 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:25.448971 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:25.452707 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:25.452782 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:25.479351 1225677 cri.go:89] found id: ""
	I1217 01:35:25.479379 1225677 logs.go:282] 0 containers: []
	W1217 01:35:25.479389 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:25.479399 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:25.479410 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:25.577317 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:25.577354 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:25.600156 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:25.600203 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:25.679524 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:25.671380   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.672033   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.673585   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.674007   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:25.675472   11863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:25.679600 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:25.679621 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:25.706792 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:25.706824 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:25.764895 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:25.764934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:25.796158 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:25.796188 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:25.823684 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:25.823721 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:25.857273 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:25.857303 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:25.915963 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:25.916003 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:25.992485 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:25.992520 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:28.577965 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:28.588733 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:28.588802 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:28.621192 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.621211 1225677 cri.go:89] found id: ""
	I1217 01:35:28.621220 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:28.621279 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.625055 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:28.625124 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:28.651718 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:28.651738 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.651742 1225677 cri.go:89] found id: ""
	I1217 01:35:28.651749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:28.651807 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.656353 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.660550 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:28.660620 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:28.688556 1225677 cri.go:89] found id: ""
	I1217 01:35:28.688580 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.688589 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:28.688596 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:28.688654 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:28.716478 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:28.716503 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:28.716508 1225677 cri.go:89] found id: ""
	I1217 01:35:28.716516 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:28.716603 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.720442 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.723785 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:28.723862 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:28.750780 1225677 cri.go:89] found id: ""
	I1217 01:35:28.750807 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.750817 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:28.750823 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:28.750882 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:28.777746 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:28.777772 1225677 cri.go:89] found id: ""
	I1217 01:35:28.777781 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:28.777836 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:28.781586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:28.781707 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:28.812032 1225677 cri.go:89] found id: ""
	I1217 01:35:28.812062 1225677 logs.go:282] 0 containers: []
	W1217 01:35:28.812072 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:28.812081 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:28.812115 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:28.910028 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:28.910067 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:28.938533 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:28.938565 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:28.982530 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:28.982566 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:29.059912 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:29.059948 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:29.087417 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:29.087449 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:29.141591 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:29.141622 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:29.162662 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:29.162694 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:29.245511 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:29.237861   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.238371   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.239908   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.240309   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:29.241742   12048 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:29.245537 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:29.245553 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:29.286747 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:29.286784 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:29.317045 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:29.317075 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:31.896935 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:31.908531 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:31.908605 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:31.951663 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:31.951684 1225677 cri.go:89] found id: ""
	I1217 01:35:31.951692 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:31.951746 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.956325 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:31.956501 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:31.990512 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:31.990578 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:31.990598 1225677 cri.go:89] found id: ""
	I1217 01:35:31.990625 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:31.990708 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:31.994957 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.001450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:32.001597 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:32.033107 1225677 cri.go:89] found id: ""
	I1217 01:35:32.033136 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.033146 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:32.033153 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:32.033245 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:32.061118 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.061140 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.061145 1225677 cri.go:89] found id: ""
	I1217 01:35:32.061153 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:32.061208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.065195 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.068963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:32.069066 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:32.099914 1225677 cri.go:89] found id: ""
	I1217 01:35:32.099941 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.099951 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:32.099957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:32.100018 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:32.134003 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.134028 1225677 cri.go:89] found id: ""
	I1217 01:35:32.134044 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:32.134101 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:32.138837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:32.138909 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:32.178095 1225677 cri.go:89] found id: ""
	I1217 01:35:32.178168 1225677 logs.go:282] 0 containers: []
	W1217 01:35:32.178193 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:32.178210 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:32.178223 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:32.219018 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:32.219049 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:32.328076 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:32.328182 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:32.347854 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:32.347887 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:32.389069 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:32.389143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:32.464016 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:32.464052 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:32.492348 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:32.492466 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:32.519965 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:32.520035 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:32.589420 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:32.581444   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.582022   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.583610   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.584050   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:32.585561   12183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:32.589485 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:32.589506 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:32.615780 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:32.615814 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:32.668491 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:32.668527 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.253556 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:35.266266 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:35.266344 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:35.303632 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.303658 1225677 cri.go:89] found id: ""
	I1217 01:35:35.303667 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:35.303726 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.307439 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:35.307511 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:35.336107 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.336131 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.336136 1225677 cri.go:89] found id: ""
	I1217 01:35:35.336143 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:35.336196 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.340106 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.343587 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:35.343667 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:35.374453 1225677 cri.go:89] found id: ""
	I1217 01:35:35.374483 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.374492 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:35.374498 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:35.374560 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:35.401769 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.401792 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.401798 1225677 cri.go:89] found id: ""
	I1217 01:35:35.401806 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:35.401860 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.405507 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.409182 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:35.409254 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:35.437191 1225677 cri.go:89] found id: ""
	I1217 01:35:35.437229 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.437280 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:35.437303 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:35.437454 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:35.464026 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.464048 1225677 cri.go:89] found id: ""
	I1217 01:35:35.464056 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:35.464113 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:35.467752 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:35.467854 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:35.495119 1225677 cri.go:89] found id: ""
	I1217 01:35:35.495143 1225677 logs.go:282] 0 containers: []
	W1217 01:35:35.495152 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:35.495161 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:35.495173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:35.538118 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:35.538157 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:35.612361 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:35.612398 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:35.642424 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:35.642454 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:35.671140 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:35.671168 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:35.753840 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:35.753879 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:35.791176 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:35.791207 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:35.861567 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:35.852465   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.853202   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855002   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.855644   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:35.857683   12318 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:35.861588 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:35.861604 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:35.887544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:35.887573 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:35.930868 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:35.930901 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:36.035955 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:36.035997 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.556940 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:38.568341 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:38.568410 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:38.602139 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:38.602163 1225677 cri.go:89] found id: ""
	I1217 01:35:38.602172 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:38.602234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.606168 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:38.606244 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:38.636762 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:38.636782 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:38.636787 1225677 cri.go:89] found id: ""
	I1217 01:35:38.636795 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:38.636849 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.640703 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.644870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:38.644980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:38.672028 1225677 cri.go:89] found id: ""
	I1217 01:35:38.672105 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.672130 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:38.672152 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:38.672252 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:38.702063 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:38.702088 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:38.702096 1225677 cri.go:89] found id: ""
	I1217 01:35:38.702104 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:38.702189 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.706075 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.710843 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:38.710923 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:38.739176 1225677 cri.go:89] found id: ""
	I1217 01:35:38.739204 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.739214 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:38.739221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:38.739281 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:38.765721 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:38.765749 1225677 cri.go:89] found id: ""
	I1217 01:35:38.765759 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:38.765835 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:38.769950 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:38.770026 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:38.797985 1225677 cri.go:89] found id: ""
	I1217 01:35:38.798013 1225677 logs.go:282] 0 containers: []
	W1217 01:35:38.798023 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:38.798033 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:38.798065 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:38.898407 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:38.898448 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:38.917886 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:38.917920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:38.999335 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:38.989331   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.990022   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.991883   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.993144   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:38.994842   12415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:38.999368 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:38.999384 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:39.041692 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:39.041729 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:39.089675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:39.089712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:39.172952 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:39.172988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:39.211704 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:39.211736 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:39.241891 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:39.241920 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:39.276958 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:39.276988 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:39.364067 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:39.364119 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:41.897002 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:41.908024 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:41.908100 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:41.937482 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:41.937556 1225677 cri.go:89] found id: ""
	I1217 01:35:41.937569 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:41.937630 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.941542 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:41.941611 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:41.987116 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:41.987139 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:41.987145 1225677 cri.go:89] found id: ""
	I1217 01:35:41.987153 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:41.987206 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.991091 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:41.994831 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:41.994905 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:42.033990 1225677 cri.go:89] found id: ""
	I1217 01:35:42.034016 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.034025 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:42.034031 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:42.034096 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:42.065878 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:42.065959 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.065980 1225677 cri.go:89] found id: ""
	I1217 01:35:42.066005 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:42.066122 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.071367 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.076378 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:42.076531 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:42.123414 1225677 cri.go:89] found id: ""
	I1217 01:35:42.123521 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.123583 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:42.123610 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:42.123706 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:42.163210 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.163302 1225677 cri.go:89] found id: ""
	I1217 01:35:42.163328 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:42.163431 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:42.168650 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:42.168758 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:42.211741 1225677 cri.go:89] found id: ""
	I1217 01:35:42.211767 1225677 logs.go:282] 0 containers: []
	W1217 01:35:42.211777 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:42.211787 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:42.211800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:42.252091 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:42.252126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:42.356409 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:42.356465 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:42.377129 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:42.377163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:42.449855 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:42.441594   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.442422   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.443492   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.444230   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:42.446007   12564 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:42.449879 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:42.449893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:42.476498 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:42.476530 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:42.518303 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:42.518337 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:42.548819 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:42.548852 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:42.578811 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:42.578840 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:42.658356 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:42.658395 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:42.700126 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:42.700173 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.276979 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:45.301570 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:45.301737 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:45.339316 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:45.339342 1225677 cri.go:89] found id: ""
	I1217 01:35:45.339351 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:45.339441 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.343543 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:45.343652 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:45.374479 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.374552 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.374574 1225677 cri.go:89] found id: ""
	I1217 01:35:45.374600 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:45.374672 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.378901 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.382870 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:45.382942 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:45.413785 1225677 cri.go:89] found id: ""
	I1217 01:35:45.413816 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.413825 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:45.413832 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:45.413894 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:45.446395 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.446417 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.446423 1225677 cri.go:89] found id: ""
	I1217 01:35:45.446431 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:45.446508 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.450414 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.454372 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:45.454448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:45.483846 1225677 cri.go:89] found id: ""
	I1217 01:35:45.483918 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.483942 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:45.483963 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:45.484039 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:45.515890 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.515962 1225677 cri.go:89] found id: ""
	I1217 01:35:45.515986 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:45.516060 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:45.519980 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:45.520107 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:45.548900 1225677 cri.go:89] found id: ""
	I1217 01:35:45.548984 1225677 logs.go:282] 0 containers: []
	W1217 01:35:45.549001 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:45.549011 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:45.549023 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:45.594641 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:45.594680 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:45.623072 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:45.623171 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:45.701558 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:45.701599 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:45.775358 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:45.767620   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.768080   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.769776   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.770218   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:45.771986   12704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:45.775423 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:45.775443 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:45.822675 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:45.822712 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:45.904212 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:45.904249 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:45.934553 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:45.934581 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:45.966200 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:45.966231 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:46.073612 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:46.073651 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:46.092826 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:46.092860 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.626362 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:48.637081 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:48.637157 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:48.663951 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:48.664018 1225677 cri.go:89] found id: ""
	I1217 01:35:48.664045 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:48.664137 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.667889 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:48.668007 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:48.695424 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:48.695498 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:48.695518 1225677 cri.go:89] found id: ""
	I1217 01:35:48.695570 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:48.695667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.699980 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.703779 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:48.703875 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:48.731347 1225677 cri.go:89] found id: ""
	I1217 01:35:48.731372 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.731381 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:48.731388 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:48.731448 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:48.761776 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:48.761802 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:48.761808 1225677 cri.go:89] found id: ""
	I1217 01:35:48.761816 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:48.761875 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.766072 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.769796 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:48.769871 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:48.799377 1225677 cri.go:89] found id: ""
	I1217 01:35:48.799404 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.799412 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:48.799418 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:48.799477 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:48.828149 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:48.828173 1225677 cri.go:89] found id: ""
	I1217 01:35:48.828192 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:48.828254 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:48.832599 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:48.832717 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:48.858554 1225677 cri.go:89] found id: ""
	I1217 01:35:48.858587 1225677 logs.go:282] 0 containers: []
	W1217 01:35:48.858597 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:48.858626 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:48.858643 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:48.894472 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:48.894502 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:48.969952 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:48.962440   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.963041   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.964606   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.965057   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:48.966120   12834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:48.969978 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:48.969994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:49.014023 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:49.014058 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:49.092630 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:49.092671 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:49.197053 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:49.197088 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:49.225929 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:49.225963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:49.253145 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:49.253174 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:49.301391 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:49.301428 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:49.337786 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:49.337819 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:49.367000 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:49.367029 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:51.942903 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:51.957586 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:51.957662 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:52.007996 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.008017 1225677 cri.go:89] found id: ""
	I1217 01:35:52.008026 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:52.008082 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.015080 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:52.015148 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:52.052213 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.052249 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.052255 1225677 cri.go:89] found id: ""
	I1217 01:35:52.052262 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:52.052318 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.056182 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.059959 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:52.060033 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:52.090239 1225677 cri.go:89] found id: ""
	I1217 01:35:52.090264 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.090274 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:52.090281 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:52.090341 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:52.118854 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:52.118874 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.118879 1225677 cri.go:89] found id: ""
	I1217 01:35:52.118886 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:52.118946 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.125093 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.128837 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:52.128931 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:52.157907 1225677 cri.go:89] found id: ""
	I1217 01:35:52.157936 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.157945 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:52.157957 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:52.158017 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:52.191428 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.191451 1225677 cri.go:89] found id: ""
	I1217 01:35:52.191459 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:52.191543 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:52.195375 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:52.195456 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:52.224407 1225677 cri.go:89] found id: ""
	I1217 01:35:52.224468 1225677 logs.go:282] 0 containers: []
	W1217 01:35:52.224477 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:52.224486 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:52.224498 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:52.252950 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:52.252981 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:52.279228 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:52.279258 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:52.298974 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:52.299007 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:52.370510 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:52.362023   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.362549   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364239   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.364895   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:52.366488   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:52.370544 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:52.370588 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:52.418893 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:52.418934 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:52.499956 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:52.499992 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:52.542158 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:52.542187 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:52.643325 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:52.643367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:52.671238 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:52.671267 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:52.712214 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:52.712252 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.294635 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:55.305795 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:55.305897 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:55.341120 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.341143 1225677 cri.go:89] found id: ""
	I1217 01:35:55.341152 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:55.341208 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.345154 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:55.345236 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:55.376865 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.376937 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.376959 1225677 cri.go:89] found id: ""
	I1217 01:35:55.376982 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:55.377065 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.381380 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.385355 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:55.385472 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:55.412679 1225677 cri.go:89] found id: ""
	I1217 01:35:55.412701 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.412710 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:55.412716 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:55.412773 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:55.439554 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:55.439573 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.439578 1225677 cri.go:89] found id: ""
	I1217 01:35:55.439585 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:55.439639 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.443337 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.446737 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:55.446804 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:55.478015 1225677 cri.go:89] found id: ""
	I1217 01:35:55.478039 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.478052 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:55.478065 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:55.478136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:55.503877 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:55.503940 1225677 cri.go:89] found id: ""
	I1217 01:35:55.503964 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:55.504038 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:55.507809 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:55.507880 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:55.539899 1225677 cri.go:89] found id: ""
	I1217 01:35:55.539926 1225677 logs.go:282] 0 containers: []
	W1217 01:35:55.539935 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:55.539951 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:55.539963 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:55.642073 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:55.642111 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:55.662102 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:55.662143 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:55.689162 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:55.689192 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:55.728771 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:55.728804 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:55.755851 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:55.755878 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:55.839759 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:55.839805 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:55.910162 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:55.901852   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.902719   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904401   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.904929   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:55.906481   13132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:55.910183 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:55.910197 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:55.962626 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:55.962664 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:56.057075 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:56.057126 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:56.095037 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:56.095069 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:35:58.632280 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:35:58.643092 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:35:58.643199 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:35:58.670245 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:58.670268 1225677 cri.go:89] found id: ""
	I1217 01:35:58.670277 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:35:58.670332 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.673988 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:35:58.674059 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:35:58.706113 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:58.706135 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:58.706140 1225677 cri.go:89] found id: ""
	I1217 01:35:58.706148 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:35:58.706234 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.710732 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.714631 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:35:58.714747 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:35:58.742956 1225677 cri.go:89] found id: ""
	I1217 01:35:58.742982 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.742991 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:35:58.742997 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:35:58.743058 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:35:58.774022 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:58.774044 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:58.774050 1225677 cri.go:89] found id: ""
	I1217 01:35:58.774058 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:35:58.774112 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.778073 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.781607 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:35:58.781686 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:35:58.808679 1225677 cri.go:89] found id: ""
	I1217 01:35:58.808703 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.808719 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:35:58.808725 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:35:58.808785 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:35:58.835922 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:58.835942 1225677 cri.go:89] found id: ""
	I1217 01:35:58.835951 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:35:58.836007 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:35:58.839615 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:35:58.839689 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:35:58.866788 1225677 cri.go:89] found id: ""
	I1217 01:35:58.866813 1225677 logs.go:282] 0 containers: []
	W1217 01:35:58.866823 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:35:58.866833 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:35:58.866866 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:35:58.968702 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:35:58.968738 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:35:58.989939 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:35:58.989967 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:35:59.058020 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:35:59.048838   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.049664   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.051442   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.052054   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:35:59.053653   13242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:35:59.058046 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:35:59.058059 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:35:59.088364 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:35:59.088394 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:35:59.141100 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:35:59.141135 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:35:59.232851 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:35:59.232891 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:35:59.262771 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:35:59.262800 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:35:59.290187 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:35:59.290224 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:35:59.339890 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:35:59.339924 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:35:59.422198 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:35:59.422236 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:01.956538 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:01.967590 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:01.967660 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:02.007538 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.007575 1225677 cri.go:89] found id: ""
	I1217 01:36:02.007584 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:02.007670 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.012001 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:02.012136 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:02.046710 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.046735 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.046741 1225677 cri.go:89] found id: ""
	I1217 01:36:02.046749 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:02.046804 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.050667 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.054450 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:02.054546 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:02.081851 1225677 cri.go:89] found id: ""
	I1217 01:36:02.081880 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.081890 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:02.081897 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:02.081980 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:02.112077 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.112101 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.112106 1225677 cri.go:89] found id: ""
	I1217 01:36:02.112114 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:02.112169 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.116263 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.121396 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:02.121492 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:02.152376 1225677 cri.go:89] found id: ""
	I1217 01:36:02.152404 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.152497 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:02.152523 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:02.152642 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:02.187133 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.187159 1225677 cri.go:89] found id: ""
	I1217 01:36:02.187168 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:02.187247 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:02.191078 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:02.191173 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:02.220566 1225677 cri.go:89] found id: ""
	I1217 01:36:02.220593 1225677 logs.go:282] 0 containers: []
	W1217 01:36:02.220602 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:02.220611 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:02.220659 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:02.253992 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:02.254021 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:02.304043 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:02.304077 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:02.350981 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:02.351020 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:02.431358 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:02.431393 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:02.458269 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:02.458298 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:02.561780 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:02.561820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:02.582487 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:02.582522 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:02.663558 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:02.654353   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.655106   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.656823   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.657888   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:02.658855   13421 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:02.663583 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:02.663596 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:02.700536 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:02.700568 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:02.775505 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:02.775547 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.310734 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:05.322909 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:05.322985 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:05.350653 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.350738 1225677 cri.go:89] found id: ""
	I1217 01:36:05.350762 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:05.350819 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.355346 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:05.355461 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:05.385411 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:05.385439 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.385445 1225677 cri.go:89] found id: ""
	I1217 01:36:05.385453 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:05.385511 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.389761 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.393387 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:05.393463 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:05.420412 1225677 cri.go:89] found id: ""
	I1217 01:36:05.420495 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.420505 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:05.420511 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:05.420569 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:05.452034 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:05.452060 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.452066 1225677 cri.go:89] found id: ""
	I1217 01:36:05.452075 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:05.452131 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.456205 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.460128 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:05.460221 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:05.486956 1225677 cri.go:89] found id: ""
	I1217 01:36:05.486986 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.486995 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:05.487002 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:05.487063 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:05.518138 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.518160 1225677 cri.go:89] found id: ""
	I1217 01:36:05.518169 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:05.518227 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:05.522038 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:05.522112 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:05.552883 1225677 cri.go:89] found id: ""
	I1217 01:36:05.552951 1225677 logs.go:282] 0 containers: []
	W1217 01:36:05.552969 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:05.552980 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:05.552994 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:05.580975 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:05.581006 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:05.677135 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:05.677178 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:05.697133 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:05.697163 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:05.725150 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:05.725181 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:05.768358 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:05.768396 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:05.794846 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:05.794876 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:05.871841 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:05.871921 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:05.905951 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:05.905982 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:05.976460 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:05.968089   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.968647   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970391   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.970766   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:05.972412   13563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:05.976482 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:05.976495 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:06.030179 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:06.030260 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.614353 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:08.625446 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:08.625527 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:08.652272 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.652300 1225677 cri.go:89] found id: ""
	I1217 01:36:08.652309 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:08.652372 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.656164 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:08.656237 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:08.682167 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.682186 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:08.682190 1225677 cri.go:89] found id: ""
	I1217 01:36:08.682198 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:08.682258 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.686632 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.690338 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:08.690409 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:08.717708 1225677 cri.go:89] found id: ""
	I1217 01:36:08.717732 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.717741 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:08.717748 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:08.717805 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:08.754193 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:08.754217 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:08.754222 1225677 cri.go:89] found id: ""
	I1217 01:36:08.754229 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:08.754285 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.758295 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.761917 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:08.762011 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:08.793723 1225677 cri.go:89] found id: ""
	I1217 01:36:08.793750 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.793761 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:08.793774 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:08.793833 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:08.820995 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:08.821018 1225677 cri.go:89] found id: ""
	I1217 01:36:08.821027 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:08.821109 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:08.824969 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:08.825043 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:08.850861 1225677 cri.go:89] found id: ""
	I1217 01:36:08.850896 1225677 logs.go:282] 0 containers: []
	W1217 01:36:08.850906 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:08.850917 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:08.850929 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:08.927540 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:08.918340   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.919268   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.920969   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.921407   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:08.923920   13648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:08.927562 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:08.927576 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:08.953082 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:08.953110 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:08.994744 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:08.994781 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:09.027277 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:09.027305 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:09.056339 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:09.056367 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:09.129785 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:09.129820 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:09.161526 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:09.161607 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:09.261869 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:09.261908 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:09.282618 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:09.282652 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:09.328912 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:09.328949 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:11.909228 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:11.920145 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:36:11.920215 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:36:11.953558 1225677 cri.go:89] found id: "7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:11.953581 1225677 cri.go:89] found id: ""
	I1217 01:36:11.953589 1225677 logs.go:282] 1 containers: [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016]
	I1217 01:36:11.953643 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.957221 1225677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:36:11.957293 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:36:11.984240 1225677 cri.go:89] found id: "3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:11.984263 1225677 cri.go:89] found id: "76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:11.984268 1225677 cri.go:89] found id: ""
	I1217 01:36:11.984276 1225677 logs.go:282] 2 containers: [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605 76c761bac8d58749266028778374519515d72756f346717589c385602e378081]
	I1217 01:36:11.984336 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.987996 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:11.991849 1225677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:36:11.991924 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:36:12.022066 1225677 cri.go:89] found id: ""
	I1217 01:36:12.022096 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.022106 1225677 logs.go:284] No container was found matching "coredns"
	I1217 01:36:12.022113 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:36:12.022174 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:36:12.058540 1225677 cri.go:89] found id: "f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.058563 1225677 cri.go:89] found id: "d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.058569 1225677 cri.go:89] found id: ""
	I1217 01:36:12.058577 1225677 logs.go:282] 2 containers: [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c]
	I1217 01:36:12.058629 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.063379 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.067419 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:36:12.067548 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:36:12.095872 1225677 cri.go:89] found id: ""
	I1217 01:36:12.095900 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.095922 1225677 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:36:12.095929 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:36:12.095998 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:36:12.134836 1225677 cri.go:89] found id: "cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.134910 1225677 cri.go:89] found id: ""
	I1217 01:36:12.134933 1225677 logs.go:282] 1 containers: [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b]
	I1217 01:36:12.135022 1225677 ssh_runner.go:195] Run: which crictl
	I1217 01:36:12.139454 1225677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:36:12.139524 1225677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:36:12.178455 1225677 cri.go:89] found id: ""
	I1217 01:36:12.178481 1225677 logs.go:282] 0 containers: []
	W1217 01:36:12.178491 1225677 logs.go:284] No container was found matching "kindnet"
	I1217 01:36:12.178500 1225677 logs.go:123] Gathering logs for kube-scheduler [d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c] ...
	I1217 01:36:12.178538 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4def8288653f87a3ffc8decf64f02583d1eccbecb9a024fcbfcf068df54f55c"
	I1217 01:36:12.215176 1225677 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:36:12.215204 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:36:12.304978 1225677 logs.go:123] Gathering logs for container status ...
	I1217 01:36:12.305015 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:36:12.342716 1225677 logs.go:123] Gathering logs for kubelet ...
	I1217 01:36:12.342745 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:36:12.444908 1225677 logs.go:123] Gathering logs for dmesg ...
	I1217 01:36:12.444945 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:36:12.463288 1225677 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:36:12.463316 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:36:12.536568 1225677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 01:36:12.527059   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.527891   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529222   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.529938   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 01:36:12.531609   13811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:36:12.536589 1225677 logs.go:123] Gathering logs for etcd [3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605] ...
	I1217 01:36:12.536603 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3c10094c382d6e75f907af872318a755240e45dbefefcb23dbe18b90cb8b1605"
	I1217 01:36:12.576446 1225677 logs.go:123] Gathering logs for kube-scheduler [f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a] ...
	I1217 01:36:12.576479 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f5ad9a814009da6173154d0a88b6a3102b15c8c6341c4030ce9ad39f24b16c2a"
	I1217 01:36:12.652969 1225677 logs.go:123] Gathering logs for kube-controller-manager [cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b] ...
	I1217 01:36:12.653004 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cbb0400b9e9d9e8b7ffbcc6714b0b934c2178f1c30a5feda12475b651b67556b"
	I1217 01:36:12.684862 1225677 logs.go:123] Gathering logs for kube-apiserver [7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016] ...
	I1217 01:36:12.684893 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7f7316bf25b32ca6032390367dccc42c3bcdd7b0205b6a8c91c332290fe43016"
	I1217 01:36:12.713785 1225677 logs.go:123] Gathering logs for etcd [76c761bac8d58749266028778374519515d72756f346717589c385602e378081] ...
	I1217 01:36:12.713815 1225677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 76c761bac8d58749266028778374519515d72756f346717589c385602e378081"
	I1217 01:36:15.267669 1225677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:36:15.282407 1225677 out.go:203] 
	W1217 01:36:15.285472 1225677 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 01:36:15.285518 1225677 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 01:36:15.285531 1225677 out.go:285] * Related issues:
	W1217 01:36:15.285545 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 01:36:15.285561 1225677 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 01:36:15.288521 1225677 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.00263192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.018401147Z" level=info msg="Created container 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62: kube-system/storage-provisioner/storage-provisioner" id=1949dc31-1f1c-4b50-a2e1-37b3fdbf1dae name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.019096564Z" level=info msg="Starting container: 69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62" id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:30:53 ha-202151 crio[664]: time="2025-12-17T01:30:53.02762405Z" level=info msg="Started container" PID=1465 containerID=69c29e5195bd539ab5bcc1f376c114c5397bc943bd006eceaeac6599ed877d62 description=kube-system/storage-provisioner/storage-provisioner id=e58fe881-9b97-46e9-9d85-1de293b077af name=/runtime.v1.RuntimeService/StartContainer sandboxID=201ec2eb9e7bac96947c26eb05eaeb60a6c9cb562fc7abd5b112bcffc3034df6
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.942366958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946089951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.9461257Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.946150479Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949691184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.94972877Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.949750136Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953024484Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953060389Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.953083707Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956843738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 01:31:01 ha-202151 crio[664]: time="2025-12-17T01:31:01.956882473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.984628463Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=d06134a9-f254-4735-8afd-66ee773b0add name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.986619446Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=64030ed7-d453-4dae-a62d-31943ce0a699 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988074458Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:09 ha-202151 crio[664]: time="2025-12-17T01:31:09.988182542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.010661643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.011529823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.034308469Z" level=info msg="Created container bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee: kube-system/kube-controller-manager-ha-202151/kube-controller-manager" id=1e6bae73-da7a-45ac-85cc-194d800914f7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.036802709Z" level=info msg="Starting container: bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee" id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 01:31:10 ha-202151 crio[664]: time="2025-12-17T01:31:10.042056225Z" level=info msg="Started container" PID=1514 containerID=bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee description=kube-system/kube-controller-manager-ha-202151/kube-controller-manager id=dd2a9c1b-19fe-4afb-ab62-d39f3d1eea3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5021c181f938b38114a133bf254586f8ff5e1e22eea40c87bb44019760307250
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	bbbccca1f1945       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Running             kube-controller-manager   7                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	69c29e5195bd5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       7                   201ec2eb9e7ba       storage-provisioner                 kube-system
	3345ee69cef2f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   7 minutes ago       Exited              kube-controller-manager   6                   5021c181f938b       kube-controller-manager-ha-202151   kube-system
	e2674511b7c44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 minutes ago       Exited              storage-provisioner       6                   201ec2eb9e7ba       storage-provisioner                 kube-system
	5b41f976d94aa       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   7991c76c60a45       coredns-66bc5c9577-km6lq            kube-system
	f78b81e996c76       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   9 minutes ago       Running             busybox                   2                   b40c6af808cd2       busybox-7b57f96db7-hw4rm            default
	4f3ffacfcf52c       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   9 minutes ago       Running             kube-proxy                2                   db6cac339dafd       kube-proxy-5gdc5                    kube-system
	cc242e356e74c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   416ecd7d82605       coredns-66bc5c9577-4s6qf            kube-system
	421b902e0a04a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   0059b57d997fb       kindnet-7b5wx                       kube-system
	9deff052e5328       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   9 minutes ago       Running             etcd                      2                   cdd6d86a58561       etcd-ha-202151                      kube-system
	b08781420f13d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   9 minutes ago       Running             kube-apiserver            3                   55c73e3aeca0b       kube-apiserver-ha-202151            kube-system
	d2d094f7ce12d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   9 minutes ago       Running             kube-scheduler            2                   9fa81adaf2298       kube-scheduler-ha-202151            kube-system
	f70584959dd02       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  2                   5cb308ab59abd       kube-vip-ha-202151                  kube-system
	
	
	==> coredns [5b41f976d94aab2a66d015407415d4106cf8778628764f4904a5062779241af6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cc242e356e74c1c82ae80013999351dff6fb19a83d4a91a90cd125e034418779] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-202151
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_12_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:12:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:12:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:37:12 +0000   Wed, 17 Dec 2025 01:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-202151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                7edb1e1f-1b17-415f-9229-48ba3527eefe
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hw4rm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-66bc5c9577-4s6qf             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 coredns-66bc5c9577-km6lq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     24m
	  kube-system                 etcd-ha-202151                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-7b5wx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-202151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-202151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-5gdc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-202151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-202151                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m14s                  kube-proxy       
	  Normal   Starting                 24m                    kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     25m (x8 over 25m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     24m                    kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   Starting                 24m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 24m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24m                    kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m                    kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           24m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           24m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeReady                24m                    kubelet          Node ha-202151 status is now: NodeReady
	  Normal   RegisteredNode           23m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   Starting                 9m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m26s (x8 over 9m27s)  kubelet          Node ha-202151 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m26s (x8 over 9m27s)  kubelet          Node ha-202151 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m26s (x8 over 9m27s)  kubelet          Node ha-202151 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m44s                  node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	  Normal   RegisteredNode           55s                    node-controller  Node ha-202151 event: Registered Node ha-202151 in Controller
	
	
	Name:               ha-202151-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:13:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:26:36 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-202151-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                04eb29d0-5ea5-46d1-ae46-afe3ee374602
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rz794                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-202151-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         24m
	  kube-system                 kindnet-nt6qx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-ha-202151-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-202151-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-hp525                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-202151-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-202151-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 24m                kube-proxy       
	  Normal   RegisteredNode           24m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             19m                node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-202151-m02 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   RegisteredNode           6m44s              node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	  Normal   NodeNotReady             5m54s              node-controller  Node ha-202151-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           55s                node-controller  Node ha-202151-m02 event: Registered Node ha-202151-m02 in Controller
	
	
	Name:               ha-202151-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_16_12_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:16:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:27:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Dec 2025 01:27:19 +0000   Wed, 17 Dec 2025 01:32:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-202151-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                84c842f9-c3a2-4245-b176-e32c4cbe3e2c
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2d7p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kindnet-cntp7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-proxy-kqgdw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 21m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     21m                cidrAllocator    Node ha-202151-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeHasSufficientPID     21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m (x3 over 21m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeReady                21m                kubelet          Node ha-202151-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-202151-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   RegisteredNode           6m44s              node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	  Normal   NodeNotReady             5m54s              node-controller  Node ha-202151-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           55s                node-controller  Node ha-202151-m04 event: Registered Node ha-202151-m04 in Controller
	
	
	Name:               ha-202151-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-202151-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=ha-202151
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_17T01_37_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:37:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-202151-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:37:47 +0000   Wed, 17 Dec 2025 01:37:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-202151-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                d903183d-46dc-44c6-9b30-b71d4e86967d
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-202151-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-rcbrp                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-ha-202151-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-ha-202151-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-52s97                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-ha-202151-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-vip-ha-202151-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        49s   kube-proxy       
	  Normal  RegisteredNode  54s   node-controller  Node ha-202151-m05 event: Registered Node ha-202151-m05 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node ha-202151-m05 event: Registered Node ha-202151-m05 in Controller
	
	
	==> dmesg <==
	[Dec17 00:16] overlayfs: idmapped layers are currently not supported
	[Dec17 00:18] overlayfs: idmapped layers are currently not supported
	[Dec17 00:20] overlayfs: idmapped layers are currently not supported
	[Dec17 00:21] overlayfs: idmapped layers are currently not supported
	[Dec17 00:23] overlayfs: idmapped layers are currently not supported
	[Dec17 00:25] overlayfs: idmapped layers are currently not supported
	[Dec17 00:26] overlayfs: idmapped layers are currently not supported
	[Dec17 00:28] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 00:29] overlayfs: idmapped layers are currently not supported
	[Dec17 00:35] overlayfs: idmapped layers are currently not supported
	[Dec17 00:36] overlayfs: idmapped layers are currently not supported
	[Dec17 00:55] overlayfs: idmapped layers are currently not supported
	[Dec17 01:12] overlayfs: idmapped layers are currently not supported
	[Dec17 01:13] overlayfs: idmapped layers are currently not supported
	[Dec17 01:14] overlayfs: idmapped layers are currently not supported
	[Dec17 01:16] overlayfs: idmapped layers are currently not supported
	[Dec17 01:17] overlayfs: idmapped layers are currently not supported
	[Dec17 01:19] kauditd_printk_skb: 8 callbacks suppressed
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	[Dec17 01:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9deff052e5328d9739983ebbe09b8d088a4ab83cb24c0b39624eba4a1c231c3c] <==
	{"level":"info","ts":"2025-12-17T01:36:49.760205Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:49.802740Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T01:36:49.802848Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"error","ts":"2025-12-17T01:36:49.817323Z","caller":"etcdserver/server.go:1601","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1601\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1542\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1514\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1466\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserv
er/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1.getChainUnaryHandler.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.5/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoo
gle.golang.org/grpc@v1.71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-12-17T01:36:49.962251Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":4980736,"size":"5.0 MB"}
	{"level":"info","ts":"2025-12-17T01:36:50.216154Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":5111,"remote-peer-id":"2e3bda924e1ae8ff","bytes":4990308,"size":"5.0 MB"}
	{"level":"info","ts":"2025-12-17T01:36:50.331837Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(3331496671281080575 4778087298962311874 12593026477526642892)"}
	{"level":"info","ts":"2025-12-17T01:36:50.332000Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.332048Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"2e3bda924e1ae8ff"}
	{"level":"warn","ts":"2025-12-17T01:36:50.491718Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:36:50.492625Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:36:50.793120Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2e3bda924e1ae8ff","error":"failed to write 2e3bda924e1ae8ff on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:44814: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-17T01:36:50.793284Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.910244Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.910296Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:50.997714Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"warn","ts":"2025-12-17T01:36:51.112334Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2e3bda924e1ae8ff","error":"failed to write 2e3bda924e1ae8ff on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:44800: write: connection reset by peer)"}
	{"level":"warn","ts":"2025-12-17T01:36:51.112463Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.165824Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.166420Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-17T01:36:51.166484Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:36:51.180136Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-17T01:36:51.180242Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2e3bda924e1ae8ff"}
	{"level":"info","ts":"2025-12-17T01:37:03.787083Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-17T01:37:20.217246Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"2e3bda924e1ae8ff","bytes":4990308,"size":"5.0 MB","took":"30.583062521s"}
	
	
	==> kernel <==
	 01:37:57 up  7:20,  0 user,  load average: 1.32, 1.44, 1.54
	Linux ha-202151 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [421b902e0a04a8b9de33dba40eff9de2915e948b549831a023a55f14ab43a351] <==
	I1217 01:37:21.944788       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:31.943099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:31.943138       1 main.go:301] handling current node
	I1217 01:37:31.943156       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:31.943162       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:31.943311       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:31.943326       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:37:31.943382       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:31.943387       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:41.945351       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:41.945459       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:41.945625       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:41.945666       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	I1217 01:37:41.945764       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:41.945816       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:41.945910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:41.946016       1 main.go:301] handling current node
	I1217 01:37:51.941622       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1217 01:37:51.941660       1 main.go:324] Node ha-202151-m05 has CIDR [10.244.2.0/24] 
	I1217 01:37:51.941858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 01:37:51.941874       1 main.go:301] handling current node
	I1217 01:37:51.941887       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1217 01:37:51.941892       1 main.go:324] Node ha-202151-m02 has CIDR [10.244.1.0/24] 
	I1217 01:37:51.942017       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1217 01:37:51.942030       1 main.go:324] Node ha-202151-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b08781420f13d5f9a5c60c47da2597e3c2664650213f3202a67a2947b35fda43] <==
	{"level":"warn","ts":"2025-12-17T01:30:14.097955Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a885a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098017Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002e254a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098431Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001c61680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098550Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a21e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002813860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098715Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203c1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.098771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021443c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100260Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002913860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100450Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002114960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001752b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.100771Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002912d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.101157Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a9c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-17T01:30:14.108687Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103c3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1217 01:30:14.109232       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1217 01:30:14.109341       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111281       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.111377       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1217 01:30:14.112738       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.651626ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	{"level":"warn","ts":"2025-12-17T01:30:14.178037Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000eec000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I1217 01:30:20.949098       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1217 01:30:43.911399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1217 01:31:13.533495       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:32:03.642642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 01:32:03.692026       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317] <==
	I1217 01:30:11.991091       1 serving.go:386] Generated self-signed cert in-memory
	I1217 01:30:13.217832       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1217 01:30:13.217864       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:13.219443       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1217 01:30:13.219569       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1217 01:30:13.220274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1217 01:30:13.220329       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 01:30:24.189762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [bbbccca1f194516c9b586e958acab6307ce66e18975339453d4aaf6a19b8c2ee] <==
	E1217 01:31:53.506340       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506373       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	E1217 01:31:53.506437       1 gc_controller.go:151] "Failed to get node" err="node \"ha-202151-m03\" not found" logger="pod-garbage-collector-controller" node="ha-202151-m03"
	I1217 01:31:53.524733       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.571989       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-202151-m03"
	I1217 01:31:53.572097       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.606958       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-202151-m03"
	I1217 01:31:53.607067       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646154       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-97bs4"
	I1217 01:31:53.646268       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695195       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-202151-m03"
	I1217 01:31:53.695310       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742527       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-202151-m03"
	I1217 01:31:53.742634       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785957       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-202151-m03"
	I1217 01:31:53.785994       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:31:53.833471       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gghqw"
	I1217 01:32:03.448660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-202151-m04"
	E1217 01:37:02.791748       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-99fnt failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-99fnt\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1217 01:37:03.587675       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-202151-m05\" does not exist"
	I1217 01:37:03.614684       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-202151-m05" podCIDRs=["10.244.2.0/24"]
	I1217 01:37:03.753344       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-202151-m05"
	I1217 01:37:03.753819       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1217 01:37:48.762452       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [4f3ffacfcf52c27d4a48be1c9762e97d9c8b2f9eff204b9108c451da8b2defab] <==
	E1217 01:28:51.112803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:58.124554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:10.248785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:29:26.153294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:30:07.912871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-202151&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1217 01:30:42.899769       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:30:42.899808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 01:30:42.899895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:30:42.921440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 01:30:42.921510       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:30:42.927648       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:30:42.928009       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:30:42.928034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:30:42.931509       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:30:42.931589       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:30:42.931909       1 config.go:200] "Starting service config controller"
	I1217 01:30:42.931953       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:30:42.932968       1 config.go:309] "Starting node config controller"
	I1217 01:30:42.932995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:30:42.933003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:30:42.933332       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:30:42.933352       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:30:43.031859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 01:30:43.032046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:30:43.033393       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2d094f7ce12da087865fa37bae5d6a14c0fc52d350f8fe80666dc2eb43ff52e] <==
	E1217 01:28:38.924937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:38.925147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:38.925091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:38.925212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:38.925293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:39.827962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 01:28:39.828496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 01:28:39.945026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 01:28:39.947443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 01:28:40.059965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 01:28:40.060779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 01:28:40.088703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 01:28:40.109776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 01:28:40.129468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 01:28:40.134968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 01:28:40.195130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 01:28:40.254624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 01:28:40.281191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 01:28:40.314175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 01:28:40.347761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 01:28:40.381360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 01:28:40.463231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 01:28:40.490812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 01:28:40.517370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 01:28:41.991837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 01:29:56 ha-202151 kubelet[802]: I1217 01:29:56.984304     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:29:56 ha-202151 kubelet[802]: E1217 01:29:56.984531     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:01 ha-202151 kubelet[802]: E1217 01:30:01.439578     802 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-202151)" interval="400ms"
	Dec 17 01:30:02 ha-202151 kubelet[802]: E1217 01:30:02.001281     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:10 ha-202151 kubelet[802]: I1217 01:30:10.983522     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:11 ha-202151 kubelet[802]: E1217 01:30:11.841503     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
	Dec 17 01:30:12 ha-202151 kubelet[802]: E1217 01:30:12.002934     802 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-202151\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-202151?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.438401     802 scope.go:117] "RemoveContainer" containerID="76e0da7e8e73be03b7ffa5f1a30d2f604cae3239a9c3bfb644c2bef08d5017c9"
	Dec 17 01:30:24 ha-202151 kubelet[802]: I1217 01:30:24.439109     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:24 ha-202151 kubelet[802]: E1217 01:30:24.439355     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.449813     802 scope.go:117] "RemoveContainer" containerID="61c769055e2e33178655adbc6de856c58722cb4c70738c4d94a535d730bf75c6"
	Dec 17 01:30:27 ha-202151 kubelet[802]: I1217 01:30:27.450264     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:27 ha-202151 kubelet[802]: E1217 01:30:27.450420     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:29 ha-202151 kubelet[802]: I1217 01:30:29.966353     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:29 ha-202151 kubelet[802]: E1217 01:30:29.966538     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:34 ha-202151 kubelet[802]: I1217 01:30:34.175661     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:34 ha-202151 kubelet[802]: E1217 01:30:34.175845     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:38 ha-202151 kubelet[802]: I1217 01:30:38.984627     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:38 ha-202151 kubelet[802]: E1217 01:30:38.985748     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(db1e59c0-7387-4c55-b417-dd3dd6c4a2e0)\"" pod="kube-system/storage-provisioner" podUID="db1e59c0-7387-4c55-b417-dd3dd6c4a2e0"
	Dec 17 01:30:47 ha-202151 kubelet[802]: I1217 01:30:47.984399     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:47 ha-202151 kubelet[802]: E1217 01:30:47.984633     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:30:52 ha-202151 kubelet[802]: I1217 01:30:52.985253     802 scope.go:117] "RemoveContainer" containerID="e2674511b7c44f8e646c4fa6706f1ca1c1113f09a1650ea72ee1c2e303478fe1"
	Dec 17 01:30:58 ha-202151 kubelet[802]: I1217 01:30:58.984851     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	Dec 17 01:30:58 ha-202151 kubelet[802]: E1217 01:30:58.985050     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-202151_kube-system(b5f944589c1c2eb7957eaa078253c600)\"" pod="kube-system/kube-controller-manager-ha-202151" podUID="b5f944589c1c2eb7957eaa078253c600"
	Dec 17 01:31:09 ha-202151 kubelet[802]: I1217 01:31:09.983912     802 scope.go:117] "RemoveContainer" containerID="3345ee69cef2f24791746b484b27d6b12a3fd4bcc73af2fa99c06182d26b0317"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-202151 -n ha-202151
helpers_test.go:270: (dbg) Run:  kubectl --context ha-202151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (6.03s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.9s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-639976 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-639976 --output=json --user=testUser: exit status 80 (1.901753825s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f82fa5a9-c17f-4fef-a93e-f9c98488f94f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-639976 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"379f7ff6-6112-4808-b1f5-95fbabe1eac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T01:39:34Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0bd6fb08-be65-4f9e-bafe-55692c2ada89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-639976 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.90s)
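Both this pause failure and the unpause failure below surface the same underlying error: "sudo runc list -f json" exits non-zero with "open /run/runc: no such file or directory" on this crio node. A minimal sketch for pulling that message out of the CloudEvents-style JSON lines in the stdout above (decode.go is a hypothetical helper, not part of the test suite; field names are taken directly from that output):

// decode.go: read minikube's --output=json event stream on stdin and print
// only the error events (type "io.k8s.sigs.minikube.error"), e.g. GUEST_PAUSE.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors just the fields used here; the "data" values in the
// stdout above are all strings, so a map[string]string is sufficient.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}

Piping the failing command through it (out/minikube-linux-arm64 pause -p json-output-639976 --output=json --user=testUser | go run decode.go) would print the GUEST_PAUSE name and message quoted above.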

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.12s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-639976 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-639976 --output=json --user=testUser: exit status 80 (2.117377226s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b0712443-7c70-4b82-98c1-a36071faa396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-639976 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"076b88ab-9886-44bf-b059-fe71cc97201a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T01:39:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"2756a986-8371-4b1b-95d6-5ff121c050b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-639976 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.12s)

                                                
                                    
x
+
TestKubernetesUpgrade (794.38s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.458724012s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-813956
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-813956: (1.474469028s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-813956 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-813956 status --format={{.Host}}: exit status 7 (98.358152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
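For context, the full sequence this test drives, including the v1.35.0-beta.0 restart logged below, condenses to the following sketch (shelling out to the same binary with the arguments from the log; verbosity flags trimmed and error handling simplified, as an illustration rather than the test's own helper code):

// upgrade_sequence.go: replay of the TestKubernetesUpgrade command sequence,
// invoking the minikube binary the way the test harness does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	profile := "kubernetes-upgrade-813956"
	steps := [][]string{
		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio"},
		{"stop", "-p", profile},
		// exit status 7 on this step just means the host is stopped ("may be ok" above)
		{"-p", profile, "status", "--format={{.Host}}"},
		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.35.0-beta.0", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Fprintf(os.Stderr, "step %v failed: %v\n", s, err)
		}
	}
}

It is the final start, against v1.35.0-beta.0, that fails in this run, exiting with status 109 after roughly 12m22s as logged below.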
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m22.7448343s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-813956] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-813956" primary control-plane node in "kubernetes-upgrade-813956" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:58:26.644334 1329399 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:58:26.644755 1329399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:58:26.644791 1329399 out.go:374] Setting ErrFile to fd 2...
	I1217 01:58:26.644811 1329399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:58:26.645295 1329399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:58:26.645853 1329399 out.go:368] Setting JSON to false
	I1217 01:58:26.646924 1329399 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27657,"bootTime":1765909050,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:58:26.647056 1329399 start.go:143] virtualization:  
	I1217 01:58:26.650176 1329399 out.go:179] * [kubernetes-upgrade-813956] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:58:26.653162 1329399 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:58:26.655022 1329399 notify.go:221] Checking for updates...
	I1217 01:58:26.659736 1329399 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:58:26.662756 1329399 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:58:26.665547 1329399 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:58:26.668410 1329399 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:58:26.671212 1329399 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:58:26.674591 1329399 config.go:182] Loaded profile config "kubernetes-upgrade-813956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 01:58:26.675144 1329399 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:58:26.718092 1329399 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:58:26.718290 1329399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:58:26.828638 1329399 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 01:58:26.818207624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:58:26.828746 1329399 docker.go:319] overlay module found
	I1217 01:58:26.834478 1329399 out.go:179] * Using the docker driver based on existing profile
	I1217 01:58:26.837279 1329399 start.go:309] selected driver: docker
	I1217 01:58:26.837296 1329399 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-813956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-813956 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:58:26.837389 1329399 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:58:26.838069 1329399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:58:26.929943 1329399 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 01:58:26.920826051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:58:26.930265 1329399 cni.go:84] Creating CNI manager for ""
	I1217 01:58:26.930320 1329399 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 01:58:26.930359 1329399 start.go:353] cluster config:
	{Name:kubernetes-upgrade-813956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-813956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:58:26.933498 1329399 out.go:179] * Starting "kubernetes-upgrade-813956" primary control-plane node in "kubernetes-upgrade-813956" cluster
	I1217 01:58:26.936373 1329399 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 01:58:26.939226 1329399 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:58:26.942641 1329399 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 01:58:26.942691 1329399 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 01:58:26.942702 1329399 cache.go:65] Caching tarball of preloaded images
	I1217 01:58:26.942785 1329399 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 01:58:26.942795 1329399 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 01:58:26.942905 1329399 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/config.json ...
	I1217 01:58:26.943114 1329399 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:58:26.982736 1329399 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:58:26.982762 1329399 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:58:26.982777 1329399 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:58:26.982808 1329399 start.go:360] acquireMachinesLock for kubernetes-upgrade-813956: {Name:mk348b77a1a91ade9311200d15d882a90a7b92d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:58:26.982857 1329399 start.go:364] duration metric: took 33.148µs to acquireMachinesLock for "kubernetes-upgrade-813956"
	I1217 01:58:26.982876 1329399 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:58:26.982881 1329399 fix.go:54] fixHost starting: 
	I1217 01:58:26.983144 1329399 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-813956 --format={{.State.Status}}
	I1217 01:58:27.011484 1329399 fix.go:112] recreateIfNeeded on kubernetes-upgrade-813956: state=Stopped err=<nil>
	W1217 01:58:27.011513 1329399 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:58:27.014740 1329399 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-813956" ...
	I1217 01:58:27.014825 1329399 cli_runner.go:164] Run: docker start kubernetes-upgrade-813956
	I1217 01:58:27.365798 1329399 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-813956 --format={{.State.Status}}
	I1217 01:58:27.396114 1329399 kic.go:430] container "kubernetes-upgrade-813956" state is running.
	I1217 01:58:27.396600 1329399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-813956
	I1217 01:58:27.427868 1329399 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/config.json ...
	I1217 01:58:27.428093 1329399 machine.go:94] provisionDockerMachine start ...
	I1217 01:58:27.428161 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:27.453509 1329399 main.go:143] libmachine: Using SSH client type: native
	I1217 01:58:27.457294 1329399 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34133 <nil> <nil>}
	I1217 01:58:27.457316 1329399 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:58:27.459871 1329399 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 01:58:30.625541 1329399 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-813956
	
	I1217 01:58:30.625582 1329399 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-813956"
	I1217 01:58:30.625662 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:30.652070 1329399 main.go:143] libmachine: Using SSH client type: native
	I1217 01:58:30.652563 1329399 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34133 <nil> <nil>}
	I1217 01:58:30.652583 1329399 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-813956 && echo "kubernetes-upgrade-813956" | sudo tee /etc/hostname
	I1217 01:58:30.816190 1329399 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-813956
	
	I1217 01:58:30.816310 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:30.838694 1329399 main.go:143] libmachine: Using SSH client type: native
	I1217 01:58:30.839007 1329399 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34133 <nil> <nil>}
	I1217 01:58:30.839025 1329399 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-813956' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-813956/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-813956' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:58:30.988992 1329399 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:58:30.989031 1329399 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 01:58:30.989064 1329399 ubuntu.go:190] setting up certificates
	I1217 01:58:30.989090 1329399 provision.go:84] configureAuth start
	I1217 01:58:30.989163 1329399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-813956
	I1217 01:58:31.012950 1329399 provision.go:143] copyHostCerts
	I1217 01:58:31.013033 1329399 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 01:58:31.013048 1329399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 01:58:31.013124 1329399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 01:58:31.013231 1329399 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 01:58:31.013243 1329399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 01:58:31.013273 1329399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 01:58:31.013335 1329399 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 01:58:31.013346 1329399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 01:58:31.013370 1329399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 01:58:31.013421 1329399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-813956 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-813956 localhost minikube]
	I1217 01:58:31.299466 1329399 provision.go:177] copyRemoteCerts
	I1217 01:58:31.299543 1329399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:58:31.299598 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:31.322591 1329399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34133 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/kubernetes-upgrade-813956/id_rsa Username:docker}
	I1217 01:58:31.434504 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 01:58:31.464997 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 01:58:31.485752 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:58:31.506489 1329399 provision.go:87] duration metric: took 517.379967ms to configureAuth
	I1217 01:58:31.506530 1329399 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:58:31.506732 1329399 config.go:182] Loaded profile config "kubernetes-upgrade-813956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:58:31.506851 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:31.526268 1329399 main.go:143] libmachine: Using SSH client type: native
	I1217 01:58:31.526589 1329399 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34133 <nil> <nil>}
	I1217 01:58:31.526613 1329399 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:58:31.906126 1329399 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:58:31.906152 1329399 machine.go:97] duration metric: took 4.478049469s to provisionDockerMachine
	I1217 01:58:31.906164 1329399 start.go:293] postStartSetup for "kubernetes-upgrade-813956" (driver="docker")
	I1217 01:58:31.906177 1329399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:58:31.906250 1329399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:58:31.906313 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:31.923387 1329399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34133 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/kubernetes-upgrade-813956/id_rsa Username:docker}
	I1217 01:58:32.033462 1329399 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:58:32.037435 1329399 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:58:32.037461 1329399 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:58:32.037472 1329399 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 01:58:32.037526 1329399 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 01:58:32.037604 1329399 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 01:58:32.037703 1329399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:58:32.046261 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:58:32.066453 1329399 start.go:296] duration metric: took 160.273635ms for postStartSetup
	I1217 01:58:32.066579 1329399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:58:32.066656 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:32.088019 1329399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34133 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/kubernetes-upgrade-813956/id_rsa Username:docker}
	I1217 01:58:32.191740 1329399 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:58:32.198985 1329399 fix.go:56] duration metric: took 5.216096508s for fixHost
	I1217 01:58:32.199028 1329399 start.go:83] releasing machines lock for "kubernetes-upgrade-813956", held for 5.216156035s
	I1217 01:58:32.199125 1329399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-813956
	I1217 01:58:32.223941 1329399 ssh_runner.go:195] Run: cat /version.json
	I1217 01:58:32.223995 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:32.224240 1329399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:58:32.224296 1329399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-813956
	I1217 01:58:32.251981 1329399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34133 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/kubernetes-upgrade-813956/id_rsa Username:docker}
	I1217 01:58:32.264741 1329399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34133 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/kubernetes-upgrade-813956/id_rsa Username:docker}
	I1217 01:58:32.368589 1329399 ssh_runner.go:195] Run: systemctl --version
	I1217 01:58:32.470022 1329399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:58:32.512431 1329399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:58:32.517506 1329399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:58:32.517581 1329399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:58:32.526490 1329399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:58:32.526517 1329399 start.go:496] detecting cgroup driver to use...
	I1217 01:58:32.526549 1329399 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:58:32.526602 1329399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:58:32.544290 1329399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:58:32.559284 1329399 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:58:32.559373 1329399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:58:32.576808 1329399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:58:32.591489 1329399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:58:32.756408 1329399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:58:32.931632 1329399 docker.go:234] disabling docker service ...
	I1217 01:58:32.931738 1329399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:58:32.949118 1329399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:58:32.963344 1329399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:58:33.125040 1329399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:58:33.280442 1329399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:58:33.296188 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:58:33.312123 1329399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:58:33.312219 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.321976 1329399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:58:33.322095 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.331787 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.342442 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.351916 1329399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:58:33.360792 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.370524 1329399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.379684 1329399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:58:33.389424 1329399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:58:33.398236 1329399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:58:33.406806 1329399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:58:33.555123 1329399 ssh_runner.go:195] Run: sudo systemctl restart crio
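	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. As a quick sanity check (not part of this run, and assuming the same drop-in layout), the rewritten values could be confirmed with:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected to show, given the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",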
	I1217 01:58:33.763063 1329399 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:58:33.763151 1329399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:58:33.767771 1329399 start.go:564] Will wait 60s for crictl version
	I1217 01:58:33.767857 1329399 ssh_runner.go:195] Run: which crictl
	I1217 01:58:33.771874 1329399 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:58:33.805270 1329399 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 01:58:33.805364 1329399 ssh_runner.go:195] Run: crio --version
	I1217 01:58:33.860889 1329399 ssh_runner.go:195] Run: crio --version
	I1217 01:58:33.897028 1329399 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1217 01:58:33.899948 1329399 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-813956 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:58:33.924860 1329399 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 01:58:33.929527 1329399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:58:33.942762 1329399 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-813956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-813956 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:58:33.942878 1329399 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 01:58:33.942943 1329399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:58:33.978904 1329399 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:58:33.978975 1329399 ssh_runner.go:195] Run: which lz4
	I1217 01:58:33.982886 1329399 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 01:58:33.986615 1329399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 01:58:33.986651 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1217 01:58:37.111889 1329399 crio.go:462] duration metric: took 3.129043609s to copy over tarball
	I1217 01:58:37.111977 1329399 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 01:58:39.678354 1329399 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.566347305s)
	I1217 01:58:39.678384 1329399 crio.go:469] duration metric: took 2.566467377s to extract the tarball
	I1217 01:58:39.678393 1329399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 01:58:39.742973 1329399 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:58:39.780467 1329399 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:58:39.780489 1329399 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:58:39.780497 1329399 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1217 01:58:39.780611 1329399 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-813956 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-813956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:58:39.780699 1329399 ssh_runner.go:195] Run: crio config
	I1217 01:58:39.949035 1329399 cni.go:84] Creating CNI manager for ""
	I1217 01:58:39.949056 1329399 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 01:58:39.949075 1329399 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:58:39.949106 1329399 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-813956 NodeName:kubernetes-upgrade-813956 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:58:39.949240 1329399 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-813956"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:58:39.949316 1329399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:58:39.958179 1329399 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:58:39.958265 1329399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:58:39.967330 1329399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1217 01:58:39.981378 1329399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:58:39.994829 1329399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
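	The kubeadm config rendered earlier in this log is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here (2229 bytes). As an aside, and not something this run executes: recent kubeadm releases include a "config validate" subcommand, so a file like this could be sanity-checked standalone with the same pinned binary, e.g.:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new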
	I1217 01:58:40.014039 1329399 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:58:40.019584 1329399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:58:40.032071 1329399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:58:40.258801 1329399 ssh_runner.go:195] Run: sudo systemctl start kubelet
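	At this point the kubelet unit (/lib/systemd/system/kubelet.service) and its 10-kubeadm.conf drop-in carry the ExecStart line quoted earlier. Outside of this run, the effective unit text and startup progress could be inspected with standard systemd tooling:
	    systemctl cat kubelet
	    journalctl -u kubelet --no-pager | tail -n 50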
	I1217 01:58:40.284365 1329399 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956 for IP: 192.168.76.2
	I1217 01:58:40.284387 1329399 certs.go:195] generating shared ca certs ...
	I1217 01:58:40.284403 1329399 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:58:40.284627 1329399 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 01:58:40.284681 1329399 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 01:58:40.284694 1329399 certs.go:257] generating profile certs ...
	I1217 01:58:40.284790 1329399 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/client.key
	I1217 01:58:40.284853 1329399 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/apiserver.key.e02b430e
	I1217 01:58:40.284906 1329399 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/proxy-client.key
	I1217 01:58:40.285029 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 01:58:40.285066 1329399 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 01:58:40.285080 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 01:58:40.285110 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 01:58:40.285141 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:58:40.285167 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 01:58:40.285231 1329399 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 01:58:40.285901 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:58:40.351585 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:58:40.391180 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:58:40.413252 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:58:40.431957 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 01:58:40.457824 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:58:40.478257 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:58:40.497245 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:58:40.516988 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 01:58:40.537256 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:58:40.555426 1329399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 01:58:40.573900 1329399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:58:40.614801 1329399 ssh_runner.go:195] Run: openssl version
	I1217 01:58:40.625320 1329399 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 01:58:40.637671 1329399 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 01:58:40.650406 1329399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 01:58:40.654888 1329399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 01:58:40.654962 1329399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 01:58:40.717021 1329399 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:58:40.734391 1329399 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:58:40.745672 1329399 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:58:40.757686 1329399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:58:40.761542 1329399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:58:40.761616 1329399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:58:40.813156 1329399 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:58:40.821277 1329399 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 01:58:40.829257 1329399 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 01:58:40.837444 1329399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 01:58:40.841963 1329399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 01:58:40.842077 1329399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 01:58:40.889253 1329399 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:58:40.897146 1329399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:58:40.905076 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:58:40.983703 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:58:41.050398 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:58:41.097073 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:58:41.161211 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:58:41.228027 1329399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
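	Each openssl call above uses -checkend 86400, i.e. it succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours). A standalone equivalent for one of the files checked above:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h (or already expired)"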
	I1217 01:58:41.277311 1329399 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-813956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-813956 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:58:41.277460 1329399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:58:41.277566 1329399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:58:41.331065 1329399 cri.go:89] found id: ""
	I1217 01:58:41.331213 1329399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:58:41.343997 1329399 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:58:41.344057 1329399 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:58:41.344151 1329399 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:58:41.352724 1329399 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:58:41.353265 1329399 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-813956" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:58:41.353431 1329399 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-1134739/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-813956" cluster setting kubeconfig missing "kubernetes-upgrade-813956" context setting]
	I1217 01:58:41.353790 1329399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:58:41.354445 1329399 kapi.go:59] client config for kubernetes-upgrade-813956: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/kubernetes-upgrade-813956/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8
(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:58:41.355218 1329399 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:58:41.355269 1329399 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:58:41.355292 1329399 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:58:41.355314 1329399 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:58:41.355353 1329399 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:58:41.355714 1329399 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:58:41.368004 1329399 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 01:57:57.951133920 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 01:58:40.006236050 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-813956"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1217 01:58:41.368074 1329399 kubeadm.go:1161] stopping kube-system containers ...
	I1217 01:58:41.368100 1329399 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 01:58:41.368185 1329399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:58:41.412176 1329399 cri.go:89] found id: ""
	I1217 01:58:41.412318 1329399 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 01:58:41.429085 1329399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:41.447153 1329399 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 17 01:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 17 01:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 17 01:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 17 01:58 /etc/kubernetes/scheduler.conf
	
	I1217 01:58:41.447288 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:41.459975 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:41.473584 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:41.485531 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:58:41.485659 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:41.498519 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:41.509854 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:58:41.509974 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:41.521009 1329399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:58:41.537285 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:58:41.622155 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:58:43.209326 1329399 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.587088242s)
	I1217 01:58:43.209391 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:58:43.526622 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:58:43.658227 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
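The run above clears stale kubeconfig files and then re-runs kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. As a minimal sketch only, the Go snippet below issues that same phase sequence; the local exec.Command call stands in for minikube's SSH runner, which is an assumption, not the project's actual API.

	// Minimal sketch (not minikube's actual code): re-run the kubeadm init
	// phases shown in the log above, one at a time, against a fixed config.
	// The binary and config paths mirror the log; plain exec.Command stands
	// in for minikube's ssh_runner, which is an assumption here.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}

		for _, phase := range phases {
			cmd := fmt.Sprintf("env PATH=\"%s:$PATH\" kubeadm init phase %s --config %s", binDir, phase, cfg)
			// sudo /bin/bash -c "<cmd>", matching the Run: lines in the log.
			out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
			}
		}
	}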
	I1217 01:58:43.767694 1329399 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:58:43.767790 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:44.268061 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:44.768475 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:45.268688 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:45.767907 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:46.268751 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:46.768490 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:47.268002 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:47.768702 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:48.268204 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:48.768788 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:49.268464 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:49.767900 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:50.268462 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:50.767910 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:51.268266 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:51.767850 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:52.268636 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:52.767986 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:53.267878 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:53.768000 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:54.267999 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:54.767961 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:55.267935 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:55.768912 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:56.268701 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:56.767938 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:57.268779 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:57.768146 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:58.267983 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:58.767910 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:59.267923 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:58:59.767930 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:00.284693 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:00.767902 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:01.267899 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:01.768823 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:02.268540 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:02.768511 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:03.268526 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:03.768589 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:04.268624 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:04.768182 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:05.268724 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:05.768619 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:06.268723 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:06.768861 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:07.267919 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:07.767946 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:08.268683 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:08.767930 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:09.268694 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:09.768225 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:10.268459 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:10.768756 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:11.267879 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:11.767947 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:12.267903 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:12.767931 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:13.268647 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:13.767989 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:14.268878 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:14.768662 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:15.268561 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:15.768634 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:16.268230 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:16.768732 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:17.268506 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:17.767953 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:18.267915 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:18.768818 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:19.268830 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:19.767927 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:20.268010 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:20.768896 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:21.267852 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:21.768507 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:22.268346 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:22.768864 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:23.268848 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:23.768560 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:24.267917 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:24.767979 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:25.267967 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:25.768488 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:26.268625 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:26.768064 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:27.267942 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:27.767895 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:28.268784 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:28.768466 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:29.268610 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:29.768863 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:30.268620 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:30.767918 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:31.267867 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:31.767884 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:32.268205 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:32.768100 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:33.268261 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:33.768751 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:34.267937 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:34.768453 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:35.268656 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:35.768836 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:36.268764 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:36.768625 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:37.268330 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:37.768770 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:38.268296 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:38.767904 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:39.268273 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:39.768667 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:40.268664 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:40.767907 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:41.268796 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:41.768108 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:42.267933 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:42.768355 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:43.267970 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
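The roughly 500ms cadence of the pgrep lines above is a poll-until-timeout wait for the kube-apiserver process to appear. Below is a minimal sketch of that polling pattern in Go, using a plain ticker and a local exec call rather than minikube's internal wait helpers and SSH runner (both assumptions for illustration).

	// Minimal sketch (assumption: not minikube's api_server.go): poll every
	// 500ms, as the timestamps above suggest, until pgrep finds a running
	// kube-apiserver or the deadline passes.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// Same probe the log runs over SSH: pgrep -xnf kube-apiserver.*minikube.*
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
			case <-ticker.C:
			}
		}
	}

	func main() {
		if err := waitForAPIServerProcess(60 * time.Second); err != nil {
			fmt.Println(err)
		}
	}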
	I1217 01:59:43.767847 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:43.767932 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:43.801899 1329399 cri.go:89] found id: ""
	I1217 01:59:43.801922 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.801930 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:43.801937 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:43.801997 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:43.831017 1329399 cri.go:89] found id: ""
	I1217 01:59:43.831097 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.831120 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:43.831143 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:43.831234 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:43.858056 1329399 cri.go:89] found id: ""
	I1217 01:59:43.858084 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.858093 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:43.858100 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:43.858160 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:43.889059 1329399 cri.go:89] found id: ""
	I1217 01:59:43.889087 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.889097 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:43.889104 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:43.889182 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:43.921458 1329399 cri.go:89] found id: ""
	I1217 01:59:43.921483 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.921492 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:43.921498 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:43.921563 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:43.947924 1329399 cri.go:89] found id: ""
	I1217 01:59:43.947948 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.947957 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:43.947963 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:43.948024 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:43.973673 1329399 cri.go:89] found id: ""
	I1217 01:59:43.973701 1329399 logs.go:282] 0 containers: []
	W1217 01:59:43.973710 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:43.973716 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:43.973772 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:44.000568 1329399 cri.go:89] found id: ""
	I1217 01:59:44.000596 1329399 logs.go:282] 0 containers: []
	W1217 01:59:44.000605 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:44.000614 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:44.000625 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:44.071216 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:44.071253 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:44.090666 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:44.090704 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:44.282912 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:59:44.282932 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:44.282945 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:44.314754 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:44.314792 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
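Each diagnostic cycle above asks crictl for one control-plane component at a time and finds none, then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. As a sketch only, the snippet below performs that per-component scan with the same crictl flags the log shows; again a plain exec call is assumed in place of the SSH runner.

	// Minimal sketch (assumption: stands in for minikube's cri.go listing):
	// ask crictl for container IDs of each control-plane component, i.e. the
	// `crictl ps -a --quiet --name=<component>` probe the log repeats.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}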
	I1217 01:59:46.847111 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:46.857232 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:46.857301 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:46.883285 1329399 cri.go:89] found id: ""
	I1217 01:59:46.883307 1329399 logs.go:282] 0 containers: []
	W1217 01:59:46.883326 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:46.883333 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:46.883389 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:46.908617 1329399 cri.go:89] found id: ""
	I1217 01:59:46.908644 1329399 logs.go:282] 0 containers: []
	W1217 01:59:46.908654 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:46.908663 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:46.908721 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:46.939330 1329399 cri.go:89] found id: ""
	I1217 01:59:46.939358 1329399 logs.go:282] 0 containers: []
	W1217 01:59:46.939368 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:46.939375 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:46.939434 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:46.968515 1329399 cri.go:89] found id: ""
	I1217 01:59:46.968546 1329399 logs.go:282] 0 containers: []
	W1217 01:59:46.968555 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:46.968561 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:46.968621 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:46.993724 1329399 cri.go:89] found id: ""
	I1217 01:59:46.993749 1329399 logs.go:282] 0 containers: []
	W1217 01:59:46.993759 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:46.993766 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:46.993824 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:47.031539 1329399 cri.go:89] found id: ""
	I1217 01:59:47.031582 1329399 logs.go:282] 0 containers: []
	W1217 01:59:47.031592 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:47.031599 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:47.031669 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:47.057321 1329399 cri.go:89] found id: ""
	I1217 01:59:47.057348 1329399 logs.go:282] 0 containers: []
	W1217 01:59:47.057357 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:47.057364 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:47.057440 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:47.083812 1329399 cri.go:89] found id: ""
	I1217 01:59:47.083849 1329399 logs.go:282] 0 containers: []
	W1217 01:59:47.083859 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:47.083868 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:47.083879 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:47.161352 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:47.161390 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:47.179154 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:47.179184 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:47.251723 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:59:47.251745 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:47.251763 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:47.282185 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:47.282220 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
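Every describe-nodes attempt in these cycles fails with "connection to the server localhost:8443 was refused", meaning nothing is listening on the apiserver port yet. Purely as an illustrative diagnostic (not part of the test harness), the sketch below probes the apiserver's standard /healthz endpoint on that port; TLS verification is skipped only because this is a throwaway local check.

	// Minimal sketch (illustrative only, not part of the test harness):
	// probe the apiserver's standard /healthz endpoint on localhost:8443.
	// A "connection refused" here matches the kubectl failures in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// Acceptable only for a local, throwaway diagnostic.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // e.g. connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz status:", resp.Status)
	}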
	I1217 01:59:49.815457 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:49.825731 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:49.825799 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:49.861634 1329399 cri.go:89] found id: ""
	I1217 01:59:49.861674 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.861684 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:49.861691 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:49.861750 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:49.887288 1329399 cri.go:89] found id: ""
	I1217 01:59:49.887318 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.887334 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:49.887341 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:49.887402 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:49.912913 1329399 cri.go:89] found id: ""
	I1217 01:59:49.912936 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.912945 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:49.912951 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:49.913012 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:49.939276 1329399 cri.go:89] found id: ""
	I1217 01:59:49.939301 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.939309 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:49.939315 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:49.939371 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:49.966417 1329399 cri.go:89] found id: ""
	I1217 01:59:49.966443 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.966453 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:49.966459 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:49.966519 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:49.997180 1329399 cri.go:89] found id: ""
	I1217 01:59:49.997230 1329399 logs.go:282] 0 containers: []
	W1217 01:59:49.997240 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:49.997247 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:49.997311 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:50.033111 1329399 cri.go:89] found id: ""
	I1217 01:59:50.033141 1329399 logs.go:282] 0 containers: []
	W1217 01:59:50.033151 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:50.033158 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:50.033219 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:50.060576 1329399 cri.go:89] found id: ""
	I1217 01:59:50.060595 1329399 logs.go:282] 0 containers: []
	W1217 01:59:50.060618 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:50.060630 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:50.060642 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:50.132324 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:50.132362 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:50.151431 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:50.151470 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:50.219517 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:59:50.219583 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:50.219613 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:50.250119 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:50.250154 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:59:52.778708 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:52.790588 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:52.790662 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:52.825469 1329399 cri.go:89] found id: ""
	I1217 01:59:52.825496 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.825507 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:52.825514 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:52.825574 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:52.852498 1329399 cri.go:89] found id: ""
	I1217 01:59:52.852524 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.852534 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:52.852541 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:52.852599 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:52.877238 1329399 cri.go:89] found id: ""
	I1217 01:59:52.877260 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.877269 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:52.877275 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:52.877330 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:52.902198 1329399 cri.go:89] found id: ""
	I1217 01:59:52.902220 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.902228 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:52.902235 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:52.902297 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:52.927350 1329399 cri.go:89] found id: ""
	I1217 01:59:52.927372 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.927381 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:52.927387 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:52.927448 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:52.959888 1329399 cri.go:89] found id: ""
	I1217 01:59:52.959974 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.959998 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:52.960018 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:52.960126 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:52.986439 1329399 cri.go:89] found id: ""
	I1217 01:59:52.986463 1329399 logs.go:282] 0 containers: []
	W1217 01:59:52.986472 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:52.986478 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:52.986536 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:53.018091 1329399 cri.go:89] found id: ""
	I1217 01:59:53.018116 1329399 logs.go:282] 0 containers: []
	W1217 01:59:53.018131 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:53.018139 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:53.018151 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:53.036288 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:53.036540 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:53.109244 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:59:53.109265 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:53.109277 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:53.140143 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:53.140177 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:59:53.173786 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:53.173813 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:55.741127 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:55.752669 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:55.752798 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:55.783322 1329399 cri.go:89] found id: ""
	I1217 01:59:55.783348 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.783357 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:55.783364 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:55.783453 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:55.813124 1329399 cri.go:89] found id: ""
	I1217 01:59:55.813151 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.813160 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:55.813166 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:55.813274 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:55.844702 1329399 cri.go:89] found id: ""
	I1217 01:59:55.844783 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.844797 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:55.844805 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:55.844881 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:55.872405 1329399 cri.go:89] found id: ""
	I1217 01:59:55.872519 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.872545 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:55.872569 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:55.872659 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:55.902835 1329399 cri.go:89] found id: ""
	I1217 01:59:55.902899 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.902922 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:55.902942 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:55.903016 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:55.927985 1329399 cri.go:89] found id: ""
	I1217 01:59:55.928063 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.928087 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:55.928108 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:55.928184 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:55.954022 1329399 cri.go:89] found id: ""
	I1217 01:59:55.954098 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.954115 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:55.954122 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:55.954190 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:55.978747 1329399 cri.go:89] found id: ""
	I1217 01:59:55.978774 1329399 logs.go:282] 0 containers: []
	W1217 01:59:55.978783 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:55.978792 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:55.978803 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:56.045761 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:56.045819 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:56.064896 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:56.064926 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:56.133288 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:59:56.133312 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:56.133327 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:56.163610 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:56.163644 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:59:58.693531 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:59:58.703721 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:59:58.703795 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:59:58.730252 1329399 cri.go:89] found id: ""
	I1217 01:59:58.730274 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.730283 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:59:58.730289 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:59:58.730346 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:59:58.768946 1329399 cri.go:89] found id: ""
	I1217 01:59:58.768971 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.768981 1329399 logs.go:284] No container was found matching "etcd"
	I1217 01:59:58.768987 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:59:58.769047 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:59:58.806794 1329399 cri.go:89] found id: ""
	I1217 01:59:58.806821 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.806830 1329399 logs.go:284] No container was found matching "coredns"
	I1217 01:59:58.806837 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:59:58.806901 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:59:58.846135 1329399 cri.go:89] found id: ""
	I1217 01:59:58.846158 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.846166 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:59:58.846172 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:59:58.846238 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:59:58.873302 1329399 cri.go:89] found id: ""
	I1217 01:59:58.873331 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.873341 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:59:58.873347 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:59:58.873412 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:59:58.899063 1329399 cri.go:89] found id: ""
	I1217 01:59:58.899091 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.899101 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:59:58.899108 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:59:58.899171 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:59:58.923787 1329399 cri.go:89] found id: ""
	I1217 01:59:58.923815 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.923824 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 01:59:58.923830 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:59:58.923895 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:59:58.951792 1329399 cri.go:89] found id: ""
	I1217 01:59:58.951820 1329399 logs.go:282] 0 containers: []
	W1217 01:59:58.951829 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:59:58.951839 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:59:58.951851 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:59:58.983146 1329399 logs.go:123] Gathering logs for container status ...
	I1217 01:59:58.983181 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:59:59.014033 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 01:59:59.014060 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:59:59.085450 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 01:59:59.085489 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:59:59.103351 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:59:59.103387 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:59:59.169535 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:01.680655 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:01.716594 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:01.716678 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:01.806636 1329399 cri.go:89] found id: ""
	I1217 02:00:01.806660 1329399 logs.go:282] 0 containers: []
	W1217 02:00:01.806676 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:01.806683 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:01.806748 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:01.882054 1329399 cri.go:89] found id: ""
	I1217 02:00:01.882083 1329399 logs.go:282] 0 containers: []
	W1217 02:00:01.882091 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:01.882099 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:01.882169 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:01.955166 1329399 cri.go:89] found id: ""
	I1217 02:00:01.955202 1329399 logs.go:282] 0 containers: []
	W1217 02:00:01.955213 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:01.955221 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:01.955298 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:02.028565 1329399 cri.go:89] found id: ""
	I1217 02:00:02.028613 1329399 logs.go:282] 0 containers: []
	W1217 02:00:02.028624 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:02.028631 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:02.028709 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:02.079999 1329399 cri.go:89] found id: ""
	I1217 02:00:02.080054 1329399 logs.go:282] 0 containers: []
	W1217 02:00:02.080065 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:02.080073 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:02.080160 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:02.115149 1329399 cri.go:89] found id: ""
	I1217 02:00:02.115190 1329399 logs.go:282] 0 containers: []
	W1217 02:00:02.115201 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:02.115207 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:02.115289 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:02.157704 1329399 cri.go:89] found id: ""
	I1217 02:00:02.157747 1329399 logs.go:282] 0 containers: []
	W1217 02:00:02.157757 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:02.157765 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:02.157842 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:02.192631 1329399 cri.go:89] found id: ""
	I1217 02:00:02.192664 1329399 logs.go:282] 0 containers: []
	W1217 02:00:02.192675 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:02.192694 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:02.192708 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:02.214853 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:02.214900 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:02.364170 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:02.364197 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:02.364221 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:02.405678 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:02.405730 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:02.444080 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:02.444116 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:05.025111 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:05.035822 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:05.035906 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:05.073422 1329399 cri.go:89] found id: ""
	I1217 02:00:05.073448 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.073456 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:05.073463 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:05.073521 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:05.102435 1329399 cri.go:89] found id: ""
	I1217 02:00:05.102459 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.102468 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:05.102474 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:05.102534 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:05.131616 1329399 cri.go:89] found id: ""
	I1217 02:00:05.131643 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.131652 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:05.131658 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:05.131763 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:05.157834 1329399 cri.go:89] found id: ""
	I1217 02:00:05.157862 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.157872 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:05.157879 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:05.157988 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:05.187579 1329399 cri.go:89] found id: ""
	I1217 02:00:05.187607 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.187622 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:05.187629 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:05.187694 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:05.215580 1329399 cri.go:89] found id: ""
	I1217 02:00:05.215604 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.215613 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:05.215620 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:05.215680 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:05.243423 1329399 cri.go:89] found id: ""
	I1217 02:00:05.243452 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.243461 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:05.243468 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:05.243534 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:05.274628 1329399 cri.go:89] found id: ""
	I1217 02:00:05.274655 1329399 logs.go:282] 0 containers: []
	W1217 02:00:05.274666 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:05.274675 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:05.274699 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:05.307836 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:05.307867 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:05.383850 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:05.383905 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:05.402405 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:05.402437 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:05.471148 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:05.471214 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:05.471232 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:08.010272 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:08.021286 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:08.021360 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:08.052812 1329399 cri.go:89] found id: ""
	I1217 02:00:08.052849 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.052863 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:08.052872 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:08.052944 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:08.079609 1329399 cri.go:89] found id: ""
	I1217 02:00:08.079638 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.079648 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:08.079655 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:08.079718 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:08.107601 1329399 cri.go:89] found id: ""
	I1217 02:00:08.107629 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.107637 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:08.107644 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:08.107704 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:08.134442 1329399 cri.go:89] found id: ""
	I1217 02:00:08.134469 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.134479 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:08.134486 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:08.134547 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:08.163181 1329399 cri.go:89] found id: ""
	I1217 02:00:08.163203 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.163211 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:08.163218 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:08.163277 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:08.192981 1329399 cri.go:89] found id: ""
	I1217 02:00:08.193010 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.193020 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:08.193027 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:08.193110 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:08.219718 1329399 cri.go:89] found id: ""
	I1217 02:00:08.219742 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.219751 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:08.219757 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:08.219854 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:08.248832 1329399 cri.go:89] found id: ""
	I1217 02:00:08.248855 1329399 logs.go:282] 0 containers: []
	W1217 02:00:08.248864 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:08.248873 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:08.248896 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:08.291864 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:08.291890 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:08.368443 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:08.368486 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:08.387569 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:08.387600 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:08.456788 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:08.456868 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:08.456891 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:10.989063 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:10.999335 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:10.999405 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:11.031395 1329399 cri.go:89] found id: ""
	I1217 02:00:11.031422 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.031431 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:11.031437 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:11.031496 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:11.057916 1329399 cri.go:89] found id: ""
	I1217 02:00:11.057947 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.057957 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:11.057963 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:11.058022 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:11.088563 1329399 cri.go:89] found id: ""
	I1217 02:00:11.088590 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.088599 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:11.088605 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:11.088677 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:11.116097 1329399 cri.go:89] found id: ""
	I1217 02:00:11.116122 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.116132 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:11.116138 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:11.116198 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:11.144079 1329399 cri.go:89] found id: ""
	I1217 02:00:11.144107 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.144116 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:11.144123 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:11.144185 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:11.172355 1329399 cri.go:89] found id: ""
	I1217 02:00:11.172461 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.172495 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:11.172513 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:11.172585 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:11.200480 1329399 cri.go:89] found id: ""
	I1217 02:00:11.200506 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.200516 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:11.200523 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:11.200588 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:11.227486 1329399 cri.go:89] found id: ""
	I1217 02:00:11.227512 1329399 logs.go:282] 0 containers: []
	W1217 02:00:11.227521 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:11.227531 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:11.227545 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:11.245608 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:11.245641 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:11.330235 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:11.330300 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:11.330320 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:11.362005 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:11.362040 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:11.395750 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:11.395780 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:13.964944 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:13.975968 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:13.976037 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:14.012146 1329399 cri.go:89] found id: ""
	I1217 02:00:14.012231 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.012273 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:14.012295 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:14.012392 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:14.039819 1329399 cri.go:89] found id: ""
	I1217 02:00:14.039843 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.039852 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:14.039858 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:14.039922 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:14.067796 1329399 cri.go:89] found id: ""
	I1217 02:00:14.067821 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.067830 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:14.067836 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:14.067910 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:14.094107 1329399 cri.go:89] found id: ""
	I1217 02:00:14.094135 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.094144 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:14.094151 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:14.094211 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:14.123477 1329399 cri.go:89] found id: ""
	I1217 02:00:14.123505 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.123514 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:14.123521 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:14.123580 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:14.155051 1329399 cri.go:89] found id: ""
	I1217 02:00:14.155072 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.155082 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:14.155088 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:14.155146 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:14.181803 1329399 cri.go:89] found id: ""
	I1217 02:00:14.181826 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.181835 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:14.181842 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:14.181899 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:14.208655 1329399 cri.go:89] found id: ""
	I1217 02:00:14.208685 1329399 logs.go:282] 0 containers: []
	W1217 02:00:14.208694 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:14.208703 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:14.208717 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:14.279064 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:14.279106 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:14.303863 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:14.303903 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:14.383353 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:14.383379 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:14.383395 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:14.414126 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:14.414164 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:16.948989 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:16.959282 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:16.959352 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:16.990012 1329399 cri.go:89] found id: ""
	I1217 02:00:16.990044 1329399 logs.go:282] 0 containers: []
	W1217 02:00:16.990054 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:16.990061 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:16.990122 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:17.018696 1329399 cri.go:89] found id: ""
	I1217 02:00:17.018724 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.018733 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:17.018752 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:17.018823 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:17.046396 1329399 cri.go:89] found id: ""
	I1217 02:00:17.046421 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.046430 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:17.046436 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:17.046501 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:17.073623 1329399 cri.go:89] found id: ""
	I1217 02:00:17.073691 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.073708 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:17.073716 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:17.073786 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:17.103065 1329399 cri.go:89] found id: ""
	I1217 02:00:17.103094 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.103104 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:17.103110 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:17.103173 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:17.130660 1329399 cri.go:89] found id: ""
	I1217 02:00:17.130683 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.130693 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:17.130703 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:17.130764 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:17.157019 1329399 cri.go:89] found id: ""
	I1217 02:00:17.157044 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.157053 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:17.157059 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:17.157116 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:17.184475 1329399 cri.go:89] found id: ""
	I1217 02:00:17.184563 1329399 logs.go:282] 0 containers: []
	W1217 02:00:17.184572 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:17.184581 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:17.184592 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:17.217437 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:17.217503 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:17.289865 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:17.289947 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:17.312035 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:17.312111 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:17.393190 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:17.393222 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:17.393252 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:19.924820 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:19.936007 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:19.936076 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:19.967761 1329399 cri.go:89] found id: ""
	I1217 02:00:19.967787 1329399 logs.go:282] 0 containers: []
	W1217 02:00:19.967797 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:19.967804 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:19.967861 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:19.994330 1329399 cri.go:89] found id: ""
	I1217 02:00:19.994353 1329399 logs.go:282] 0 containers: []
	W1217 02:00:19.994363 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:19.994369 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:19.994429 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:20.031899 1329399 cri.go:89] found id: ""
	I1217 02:00:20.031925 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.031935 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:20.031942 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:20.032007 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:20.067071 1329399 cri.go:89] found id: ""
	I1217 02:00:20.067096 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.067105 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:20.067112 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:20.067177 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:20.095674 1329399 cri.go:89] found id: ""
	I1217 02:00:20.095700 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.095710 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:20.095717 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:20.095780 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:20.128990 1329399 cri.go:89] found id: ""
	I1217 02:00:20.129017 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.129026 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:20.129033 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:20.129094 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:20.156700 1329399 cri.go:89] found id: ""
	I1217 02:00:20.156738 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.156748 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:20.156778 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:20.156867 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:20.185019 1329399 cri.go:89] found id: ""
	I1217 02:00:20.185055 1329399 logs.go:282] 0 containers: []
	W1217 02:00:20.185065 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:20.185074 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:20.185090 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:20.253200 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:20.253236 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:20.273170 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:20.273203 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:20.346994 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:20.347017 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:20.347030 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:20.378440 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:20.378476 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:22.909254 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:22.919862 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:22.919942 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:22.946634 1329399 cri.go:89] found id: ""
	I1217 02:00:22.946657 1329399 logs.go:282] 0 containers: []
	W1217 02:00:22.946666 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:22.946672 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:22.946729 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:22.972646 1329399 cri.go:89] found id: ""
	I1217 02:00:22.972670 1329399 logs.go:282] 0 containers: []
	W1217 02:00:22.972679 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:22.972685 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:22.972748 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:22.998418 1329399 cri.go:89] found id: ""
	I1217 02:00:22.998444 1329399 logs.go:282] 0 containers: []
	W1217 02:00:22.998453 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:22.998460 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:22.998516 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:23.026278 1329399 cri.go:89] found id: ""
	I1217 02:00:23.026302 1329399 logs.go:282] 0 containers: []
	W1217 02:00:23.026311 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:23.026317 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:23.026384 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:23.057202 1329399 cri.go:89] found id: ""
	I1217 02:00:23.057228 1329399 logs.go:282] 0 containers: []
	W1217 02:00:23.057237 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:23.057244 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:23.057304 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:23.088074 1329399 cri.go:89] found id: ""
	I1217 02:00:23.088102 1329399 logs.go:282] 0 containers: []
	W1217 02:00:23.088112 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:23.088120 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:23.088183 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:23.119798 1329399 cri.go:89] found id: ""
	I1217 02:00:23.119822 1329399 logs.go:282] 0 containers: []
	W1217 02:00:23.119831 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:23.119838 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:23.119903 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:23.147656 1329399 cri.go:89] found id: ""
	I1217 02:00:23.147683 1329399 logs.go:282] 0 containers: []
	W1217 02:00:23.147696 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:23.147705 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:23.147717 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:23.179759 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:23.179839 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:23.217692 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:23.217717 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:23.286975 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:23.287012 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:23.307449 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:23.307480 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:23.385562 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:25.885846 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:25.896028 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:25.896100 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:25.920826 1329399 cri.go:89] found id: ""
	I1217 02:00:25.920854 1329399 logs.go:282] 0 containers: []
	W1217 02:00:25.920863 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:25.920871 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:25.920935 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:25.946463 1329399 cri.go:89] found id: ""
	I1217 02:00:25.946491 1329399 logs.go:282] 0 containers: []
	W1217 02:00:25.946500 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:25.946507 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:25.946567 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:25.971626 1329399 cri.go:89] found id: ""
	I1217 02:00:25.971653 1329399 logs.go:282] 0 containers: []
	W1217 02:00:25.971662 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:25.971669 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:25.971725 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:26.001025 1329399 cri.go:89] found id: ""
	I1217 02:00:26.001056 1329399 logs.go:282] 0 containers: []
	W1217 02:00:26.001065 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:26.001072 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:26.001134 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:26.031503 1329399 cri.go:89] found id: ""
	I1217 02:00:26.031528 1329399 logs.go:282] 0 containers: []
	W1217 02:00:26.031537 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:26.031544 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:26.031602 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:26.062767 1329399 cri.go:89] found id: ""
	I1217 02:00:26.062793 1329399 logs.go:282] 0 containers: []
	W1217 02:00:26.062802 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:26.062808 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:26.062881 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:26.089022 1329399 cri.go:89] found id: ""
	I1217 02:00:26.089046 1329399 logs.go:282] 0 containers: []
	W1217 02:00:26.089055 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:26.089062 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:26.089122 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:26.115271 1329399 cri.go:89] found id: ""
	I1217 02:00:26.115299 1329399 logs.go:282] 0 containers: []
	W1217 02:00:26.115308 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:26.115317 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:26.115331 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:26.185864 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:26.185899 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:26.203959 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:26.203990 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:26.288680 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:26.288703 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:26.288717 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:26.325839 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:26.325881 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:28.855585 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:28.866346 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:28.866422 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:28.895576 1329399 cri.go:89] found id: ""
	I1217 02:00:28.895599 1329399 logs.go:282] 0 containers: []
	W1217 02:00:28.895608 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:28.895614 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:28.895677 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:28.925928 1329399 cri.go:89] found id: ""
	I1217 02:00:28.925955 1329399 logs.go:282] 0 containers: []
	W1217 02:00:28.925965 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:28.925971 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:28.926030 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:28.952622 1329399 cri.go:89] found id: ""
	I1217 02:00:28.952649 1329399 logs.go:282] 0 containers: []
	W1217 02:00:28.952659 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:28.952666 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:28.952730 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:28.979009 1329399 cri.go:89] found id: ""
	I1217 02:00:28.979036 1329399 logs.go:282] 0 containers: []
	W1217 02:00:28.979045 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:28.979052 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:28.979110 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:29.019342 1329399 cri.go:89] found id: ""
	I1217 02:00:29.019369 1329399 logs.go:282] 0 containers: []
	W1217 02:00:29.019378 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:29.019391 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:29.019454 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:29.049190 1329399 cri.go:89] found id: ""
	I1217 02:00:29.049217 1329399 logs.go:282] 0 containers: []
	W1217 02:00:29.049226 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:29.049233 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:29.049292 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:29.079371 1329399 cri.go:89] found id: ""
	I1217 02:00:29.079398 1329399 logs.go:282] 0 containers: []
	W1217 02:00:29.079407 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:29.079413 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:29.079505 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:29.109605 1329399 cri.go:89] found id: ""
	I1217 02:00:29.109632 1329399 logs.go:282] 0 containers: []
	W1217 02:00:29.109641 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:29.109651 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:29.109669 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:29.177666 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:29.177706 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:29.195833 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:29.195865 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:29.280234 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:29.280257 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:29.280271 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:29.314464 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:29.314496 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:31.844914 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:31.854942 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:31.855014 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:31.880805 1329399 cri.go:89] found id: ""
	I1217 02:00:31.880831 1329399 logs.go:282] 0 containers: []
	W1217 02:00:31.880840 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:31.880847 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:31.880905 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:31.906966 1329399 cri.go:89] found id: ""
	I1217 02:00:31.906993 1329399 logs.go:282] 0 containers: []
	W1217 02:00:31.907002 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:31.907008 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:31.907068 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:31.932794 1329399 cri.go:89] found id: ""
	I1217 02:00:31.932822 1329399 logs.go:282] 0 containers: []
	W1217 02:00:31.932831 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:31.932839 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:31.932898 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:31.963891 1329399 cri.go:89] found id: ""
	I1217 02:00:31.963918 1329399 logs.go:282] 0 containers: []
	W1217 02:00:31.963927 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:31.963934 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:31.963995 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:31.989932 1329399 cri.go:89] found id: ""
	I1217 02:00:31.989959 1329399 logs.go:282] 0 containers: []
	W1217 02:00:31.989968 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:31.989975 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:31.990040 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:32.020477 1329399 cri.go:89] found id: ""
	I1217 02:00:32.020511 1329399 logs.go:282] 0 containers: []
	W1217 02:00:32.020521 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:32.020528 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:32.020589 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:32.045806 1329399 cri.go:89] found id: ""
	I1217 02:00:32.045833 1329399 logs.go:282] 0 containers: []
	W1217 02:00:32.045843 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:32.045850 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:32.045923 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:32.073457 1329399 cri.go:89] found id: ""
	I1217 02:00:32.073485 1329399 logs.go:282] 0 containers: []
	W1217 02:00:32.073494 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:32.073504 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:32.073546 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:32.141049 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:32.141089 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:32.159946 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:32.159980 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:32.226727 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:32.226792 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:32.226817 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:32.258984 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:32.259021 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:34.800677 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:34.811309 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:34.811381 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:34.837054 1329399 cri.go:89] found id: ""
	I1217 02:00:34.837084 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.837093 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:34.837099 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:34.837175 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:34.866561 1329399 cri.go:89] found id: ""
	I1217 02:00:34.866588 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.866597 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:34.866604 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:34.866663 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:34.894297 1329399 cri.go:89] found id: ""
	I1217 02:00:34.894328 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.894338 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:34.894345 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:34.894445 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:34.924103 1329399 cri.go:89] found id: ""
	I1217 02:00:34.924128 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.924138 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:34.924146 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:34.924203 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:34.951771 1329399 cri.go:89] found id: ""
	I1217 02:00:34.951798 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.951808 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:34.951814 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:34.951873 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:34.979940 1329399 cri.go:89] found id: ""
	I1217 02:00:34.980023 1329399 logs.go:282] 0 containers: []
	W1217 02:00:34.980047 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:34.980090 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:34.980197 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:35.011546 1329399 cri.go:89] found id: ""
	I1217 02:00:35.011575 1329399 logs.go:282] 0 containers: []
	W1217 02:00:35.011585 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:35.011593 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:35.011662 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:35.040229 1329399 cri.go:89] found id: ""
	I1217 02:00:35.040257 1329399 logs.go:282] 0 containers: []
	W1217 02:00:35.040267 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:35.040278 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:35.040290 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:35.119451 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:35.119505 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:35.138334 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:35.138371 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:35.209041 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:35.209066 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:35.209080 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:35.241103 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:35.241141 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:37.777964 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:37.790904 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:37.790977 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:37.818098 1329399 cri.go:89] found id: ""
	I1217 02:00:37.818124 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.818133 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:37.818139 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:37.818197 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:37.844164 1329399 cri.go:89] found id: ""
	I1217 02:00:37.844190 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.844199 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:37.844205 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:37.844263 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:37.870767 1329399 cri.go:89] found id: ""
	I1217 02:00:37.870794 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.870803 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:37.870811 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:37.870868 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:37.896480 1329399 cri.go:89] found id: ""
	I1217 02:00:37.896511 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.896521 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:37.896528 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:37.896592 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:37.927548 1329399 cri.go:89] found id: ""
	I1217 02:00:37.927576 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.927587 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:37.927593 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:37.927659 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:37.953975 1329399 cri.go:89] found id: ""
	I1217 02:00:37.954002 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.954012 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:37.954019 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:37.954079 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:37.983014 1329399 cri.go:89] found id: ""
	I1217 02:00:37.983039 1329399 logs.go:282] 0 containers: []
	W1217 02:00:37.983048 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:37.983054 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:37.983114 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:38.011098 1329399 cri.go:89] found id: ""
	I1217 02:00:38.011125 1329399 logs.go:282] 0 containers: []
	W1217 02:00:38.011135 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:38.011144 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:38.011156 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:38.085090 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:38.085130 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:38.103612 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:38.103640 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:38.175209 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:38.175232 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:38.175244 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:38.207662 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:38.207701 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:40.745794 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:40.757851 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:40.757921 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:40.807472 1329399 cri.go:89] found id: ""
	I1217 02:00:40.807500 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.807509 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:40.807516 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:40.807575 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:40.848628 1329399 cri.go:89] found id: ""
	I1217 02:00:40.848650 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.848659 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:40.848665 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:40.848727 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:40.879839 1329399 cri.go:89] found id: ""
	I1217 02:00:40.879861 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.879870 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:40.879876 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:40.879938 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:40.910799 1329399 cri.go:89] found id: ""
	I1217 02:00:40.910822 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.910830 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:40.910837 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:40.910896 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:40.939704 1329399 cri.go:89] found id: ""
	I1217 02:00:40.939726 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.939734 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:40.939741 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:40.939800 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:40.966989 1329399 cri.go:89] found id: ""
	I1217 02:00:40.967011 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.967020 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:40.967027 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:40.967083 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:40.992981 1329399 cri.go:89] found id: ""
	I1217 02:00:40.993008 1329399 logs.go:282] 0 containers: []
	W1217 02:00:40.993018 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:40.993025 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:40.993085 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:41.018828 1329399 cri.go:89] found id: ""
	I1217 02:00:41.018854 1329399 logs.go:282] 0 containers: []
	W1217 02:00:41.018864 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:41.018873 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:41.018884 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:41.088318 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:41.088357 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:41.106453 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:41.106485 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:41.171326 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:41.171348 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:41.171361 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:41.203202 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:41.203237 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
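The iteration above is one complete health-check pass: minikube looks for a running kube-apiserver process, asks CRI-O (via crictl) for each expected control-plane and addon container by name, and then gathers kubelet, dmesg, CRI-O and "describe nodes" output. Every container lookup returns an empty id list and the kubectl step is refused because nothing is listening on localhost:8443. The same checks can be re-run by hand from a shell inside the node to confirm the state; the sketch below uses only commands that appear in the log and assumes a node shell (for example one obtained with `minikube ssh`):

    # Sketch: repeat the checks minikube runs in the log above, by hand.
    # 1. Is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # 2. Ask CRI-O for each expected container by name (same crictl calls as the log).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done
    # 3. The same unit logs minikube collects.
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u crio -n 400 --no-pager | tail -n 40
    # 4. The describe-nodes step that fails in the log (apiserver refuses 8443).
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig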
	I1217 02:00:43.733654 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:43.743677 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:43.743744 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:43.780125 1329399 cri.go:89] found id: ""
	I1217 02:00:43.780145 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.780153 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:43.780159 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:43.780214 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:43.810410 1329399 cri.go:89] found id: ""
	I1217 02:00:43.810431 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.810440 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:43.810446 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:43.810502 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:43.838990 1329399 cri.go:89] found id: ""
	I1217 02:00:43.839012 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.839020 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:43.839026 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:43.839085 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:43.865356 1329399 cri.go:89] found id: ""
	I1217 02:00:43.865377 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.865385 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:43.865392 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:43.865450 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:43.893423 1329399 cri.go:89] found id: ""
	I1217 02:00:43.893501 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.893524 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:43.893564 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:43.893658 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:43.930475 1329399 cri.go:89] found id: ""
	I1217 02:00:43.930496 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.930504 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:43.930511 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:43.930564 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:43.966387 1329399 cri.go:89] found id: ""
	I1217 02:00:43.966408 1329399 logs.go:282] 0 containers: []
	W1217 02:00:43.966416 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:43.966422 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:43.966480 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:44.007021 1329399 cri.go:89] found id: ""
	I1217 02:00:44.007044 1329399 logs.go:282] 0 containers: []
	W1217 02:00:44.007053 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:44.007061 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:44.007078 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:44.092275 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:44.092354 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:44.119751 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:44.119864 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:44.211219 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:44.211244 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:44.211257 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:44.245096 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:44.245134 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:46.822038 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:46.832279 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:46.832348 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:46.859649 1329399 cri.go:89] found id: ""
	I1217 02:00:46.859675 1329399 logs.go:282] 0 containers: []
	W1217 02:00:46.859684 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:46.859691 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:46.859749 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:46.900900 1329399 cri.go:89] found id: ""
	I1217 02:00:46.900927 1329399 logs.go:282] 0 containers: []
	W1217 02:00:46.900936 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:46.900943 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:46.901001 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:46.939254 1329399 cri.go:89] found id: ""
	I1217 02:00:46.939281 1329399 logs.go:282] 0 containers: []
	W1217 02:00:46.939290 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:46.939297 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:46.939354 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:46.977322 1329399 cri.go:89] found id: ""
	I1217 02:00:46.977354 1329399 logs.go:282] 0 containers: []
	W1217 02:00:46.977363 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:46.977370 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:46.977434 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:47.017393 1329399 cri.go:89] found id: ""
	I1217 02:00:47.017420 1329399 logs.go:282] 0 containers: []
	W1217 02:00:47.017431 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:47.017437 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:47.017498 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:47.055429 1329399 cri.go:89] found id: ""
	I1217 02:00:47.055455 1329399 logs.go:282] 0 containers: []
	W1217 02:00:47.055464 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:47.055471 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:47.055531 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:47.098333 1329399 cri.go:89] found id: ""
	I1217 02:00:47.098359 1329399 logs.go:282] 0 containers: []
	W1217 02:00:47.098368 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:47.098375 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:47.098431 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:47.128215 1329399 cri.go:89] found id: ""
	I1217 02:00:47.128240 1329399 logs.go:282] 0 containers: []
	W1217 02:00:47.128249 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:47.128259 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:47.128270 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:47.164709 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:47.164743 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:47.205396 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:47.205425 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:47.293520 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:47.293562 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:47.327038 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:47.327073 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:47.451643 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:49.953296 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:49.966149 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:49.966222 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:49.992723 1329399 cri.go:89] found id: ""
	I1217 02:00:49.992749 1329399 logs.go:282] 0 containers: []
	W1217 02:00:49.992758 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:49.992773 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:49.992832 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:50.028007 1329399 cri.go:89] found id: ""
	I1217 02:00:50.028036 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.028046 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:50.028054 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:50.028118 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:50.060350 1329399 cri.go:89] found id: ""
	I1217 02:00:50.060375 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.060385 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:50.060392 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:50.060494 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:50.092057 1329399 cri.go:89] found id: ""
	I1217 02:00:50.092089 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.092098 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:50.092105 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:50.092163 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:50.120620 1329399 cri.go:89] found id: ""
	I1217 02:00:50.120646 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.120667 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:50.120673 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:50.120735 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:50.146955 1329399 cri.go:89] found id: ""
	I1217 02:00:50.146981 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.146990 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:50.146997 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:50.147056 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:50.177114 1329399 cri.go:89] found id: ""
	I1217 02:00:50.177141 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.177150 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:50.177157 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:50.177224 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:50.207491 1329399 cri.go:89] found id: ""
	I1217 02:00:50.207517 1329399 logs.go:282] 0 containers: []
	W1217 02:00:50.207527 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:50.207537 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:50.207548 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:50.276853 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:50.276937 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:50.295486 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:50.295645 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:50.371375 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:50.371398 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:50.371411 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:50.405501 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:50.405532 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:52.945969 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:52.960989 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:52.961057 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:52.999245 1329399 cri.go:89] found id: ""
	I1217 02:00:52.999268 1329399 logs.go:282] 0 containers: []
	W1217 02:00:52.999276 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:52.999283 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:52.999345 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:53.046880 1329399 cri.go:89] found id: ""
	I1217 02:00:53.046901 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.046909 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:53.046915 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:53.046974 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:53.079412 1329399 cri.go:89] found id: ""
	I1217 02:00:53.079434 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.079442 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:53.079449 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:53.079506 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:53.112704 1329399 cri.go:89] found id: ""
	I1217 02:00:53.112726 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.112734 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:53.112740 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:53.112815 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:53.151250 1329399 cri.go:89] found id: ""
	I1217 02:00:53.151300 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.151309 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:53.151316 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:53.151401 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:53.190658 1329399 cri.go:89] found id: ""
	I1217 02:00:53.190689 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.190698 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:53.190715 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:53.190809 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:53.234760 1329399 cri.go:89] found id: ""
	I1217 02:00:53.234781 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.234790 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:53.234796 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:53.234854 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:53.280369 1329399 cri.go:89] found id: ""
	I1217 02:00:53.280393 1329399 logs.go:282] 0 containers: []
	W1217 02:00:53.280401 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:53.280426 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:53.280457 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:53.385903 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:53.385945 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:53.411712 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:53.411740 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:53.495709 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:53.495774 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:53.495803 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:53.530339 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:53.530375 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:56.069153 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:56.079996 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:56.080067 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:56.107320 1329399 cri.go:89] found id: ""
	I1217 02:00:56.107344 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.107358 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:56.107364 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:56.107427 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:56.133091 1329399 cri.go:89] found id: ""
	I1217 02:00:56.133114 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.133122 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:56.133128 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:56.133186 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:56.159026 1329399 cri.go:89] found id: ""
	I1217 02:00:56.159050 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.159059 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:56.159066 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:56.159131 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:56.186240 1329399 cri.go:89] found id: ""
	I1217 02:00:56.186311 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.186338 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:56.186354 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:56.186429 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:56.212564 1329399 cri.go:89] found id: ""
	I1217 02:00:56.212590 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.212599 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:56.212605 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:56.212683 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:56.238424 1329399 cri.go:89] found id: ""
	I1217 02:00:56.238459 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.238469 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:56.238476 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:56.238545 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:56.263891 1329399 cri.go:89] found id: ""
	I1217 02:00:56.263917 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.263927 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:56.263934 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:56.264047 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:56.293497 1329399 cri.go:89] found id: ""
	I1217 02:00:56.293521 1329399 logs.go:282] 0 containers: []
	W1217 02:00:56.293529 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:56.293538 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:56.293554 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:56.363225 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:56.363265 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:56.380821 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:56.380851 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:56.443734 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:00:56.443755 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:56.443768 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:56.474588 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:56.474625 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:59.015387 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:00:59.027446 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:00:59.027521 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:00:59.063606 1329399 cri.go:89] found id: ""
	I1217 02:00:59.063635 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.063645 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:00:59.063651 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:00:59.063713 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:00:59.095688 1329399 cri.go:89] found id: ""
	I1217 02:00:59.095713 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.095721 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:00:59.095727 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:00:59.095782 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:00:59.133792 1329399 cri.go:89] found id: ""
	I1217 02:00:59.133818 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.133827 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:00:59.133834 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:00:59.133898 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:00:59.164724 1329399 cri.go:89] found id: ""
	I1217 02:00:59.164750 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.164759 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:00:59.164765 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:00:59.164824 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:00:59.195513 1329399 cri.go:89] found id: ""
	I1217 02:00:59.195539 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.195548 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:00:59.195555 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:00:59.195614 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:00:59.228089 1329399 cri.go:89] found id: ""
	I1217 02:00:59.228126 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.228149 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:00:59.228156 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:00:59.228235 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:00:59.280356 1329399 cri.go:89] found id: ""
	I1217 02:00:59.280388 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.280397 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:00:59.280410 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:00:59.280494 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:00:59.319746 1329399 cri.go:89] found id: ""
	I1217 02:00:59.319774 1329399 logs.go:282] 0 containers: []
	W1217 02:00:59.319784 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:00:59.319793 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:00:59.319806 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:00:59.355083 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:00:59.355135 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:00:59.408629 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:00:59.408660 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:00:59.495614 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:00:59.495657 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:00:59.519284 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:00:59.519319 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:00:59.657811 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
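The timestamps show this pass repeating roughly every three seconds (02:00:38, 02:00:40, 02:00:43, ...), so the surrounding wait is a poll-until-ready loop with a deadline. A minimal equivalent of that wait is sketched below; the 240-second deadline is illustrative only, since the actual timeout minikube applies is not visible in this excerpt:

    # Sketch: poll for the apiserver the way the log does (one check every ~3s),
    # giving up after an illustrative deadline. The timeout value is an assumption,
    # not taken from the log.
    deadline=$((SECONDS + 240))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver process is up"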
	I1217 02:01:02.158871 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:02.169730 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:02.169806 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:02.198282 1329399 cri.go:89] found id: ""
	I1217 02:01:02.198309 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.198320 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:02.198326 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:02.198387 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:02.226491 1329399 cri.go:89] found id: ""
	I1217 02:01:02.226514 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.226523 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:02.226542 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:02.226600 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:02.253505 1329399 cri.go:89] found id: ""
	I1217 02:01:02.253525 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.253534 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:02.253540 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:02.253603 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:02.280280 1329399 cri.go:89] found id: ""
	I1217 02:01:02.280308 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.280317 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:02.280324 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:02.280383 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:02.308132 1329399 cri.go:89] found id: ""
	I1217 02:01:02.308160 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.308169 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:02.308176 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:02.308235 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:02.333850 1329399 cri.go:89] found id: ""
	I1217 02:01:02.333876 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.333886 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:02.333893 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:02.333968 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:02.360161 1329399 cri.go:89] found id: ""
	I1217 02:01:02.360187 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.360197 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:02.360203 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:02.360288 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:02.386210 1329399 cri.go:89] found id: ""
	I1217 02:01:02.386233 1329399 logs.go:282] 0 containers: []
	W1217 02:01:02.386242 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:02.386251 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:02.386263 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:02.403745 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:02.403778 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:02.472141 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:02.472161 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:02.472174 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:02.503720 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:02.503756 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:02.537836 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:02.537864 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:05.109387 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:05.119896 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:05.120015 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:05.150502 1329399 cri.go:89] found id: ""
	I1217 02:01:05.150529 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.150539 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:05.150546 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:05.150649 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:05.180960 1329399 cri.go:89] found id: ""
	I1217 02:01:05.180986 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.180996 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:05.181003 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:05.181068 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:05.207416 1329399 cri.go:89] found id: ""
	I1217 02:01:05.207496 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.207519 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:05.207540 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:05.207634 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:05.236486 1329399 cri.go:89] found id: ""
	I1217 02:01:05.236564 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.236581 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:05.236589 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:05.236658 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:05.268737 1329399 cri.go:89] found id: ""
	I1217 02:01:05.268832 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.268857 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:05.268878 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:05.268969 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:05.295499 1329399 cri.go:89] found id: ""
	I1217 02:01:05.295577 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.295602 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:05.295625 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:05.295702 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:05.327333 1329399 cri.go:89] found id: ""
	I1217 02:01:05.327408 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.327432 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:05.327455 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:05.327537 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:05.355677 1329399 cri.go:89] found id: ""
	I1217 02:01:05.355703 1329399 logs.go:282] 0 containers: []
	W1217 02:01:05.355713 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:05.355723 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:05.355735 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:05.423688 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:05.423730 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:05.442592 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:05.442624 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:05.514568 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:05.514603 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:05.514618 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:05.549981 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:05.550025 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:08.082693 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:08.093407 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:08.093483 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:08.123768 1329399 cri.go:89] found id: ""
	I1217 02:01:08.123804 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.123813 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:08.123820 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:08.123928 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:08.149447 1329399 cri.go:89] found id: ""
	I1217 02:01:08.149473 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.149482 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:08.149488 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:08.149548 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:08.176219 1329399 cri.go:89] found id: ""
	I1217 02:01:08.176253 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.176263 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:08.176270 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:08.176361 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:08.207605 1329399 cri.go:89] found id: ""
	I1217 02:01:08.207629 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.207637 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:08.207644 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:08.207702 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:08.234013 1329399 cri.go:89] found id: ""
	I1217 02:01:08.234036 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.234045 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:08.234051 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:08.234113 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:08.260049 1329399 cri.go:89] found id: ""
	I1217 02:01:08.260078 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.260087 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:08.260094 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:08.260160 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:08.286587 1329399 cri.go:89] found id: ""
	I1217 02:01:08.286613 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.286622 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:08.286629 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:08.286703 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:08.313295 1329399 cri.go:89] found id: ""
	I1217 02:01:08.313322 1329399 logs.go:282] 0 containers: []
	W1217 02:01:08.313331 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:08.313341 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:08.313354 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:08.345256 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:08.345295 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:08.384022 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:08.384050 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:08.454932 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:08.454966 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:08.473105 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:08.473140 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:08.543870 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:11.044543 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:11.055683 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:11.055754 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:11.084593 1329399 cri.go:89] found id: ""
	I1217 02:01:11.084620 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.084630 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:11.084638 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:11.084703 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:11.112472 1329399 cri.go:89] found id: ""
	I1217 02:01:11.112497 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.112506 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:11.112512 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:11.112582 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:11.140086 1329399 cri.go:89] found id: ""
	I1217 02:01:11.140111 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.140120 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:11.140127 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:11.140191 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:11.167910 1329399 cri.go:89] found id: ""
	I1217 02:01:11.167938 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.167948 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:11.167955 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:11.168018 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:11.201931 1329399 cri.go:89] found id: ""
	I1217 02:01:11.201958 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.201967 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:11.201974 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:11.202033 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:11.228465 1329399 cri.go:89] found id: ""
	I1217 02:01:11.228490 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.228500 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:11.228507 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:11.228577 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:11.259569 1329399 cri.go:89] found id: ""
	I1217 02:01:11.259595 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.259605 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:11.259612 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:11.259673 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:11.285673 1329399 cri.go:89] found id: ""
	I1217 02:01:11.285750 1329399 logs.go:282] 0 containers: []
	W1217 02:01:11.285767 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:11.285777 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:11.285792 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:11.355937 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:11.355974 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:11.373750 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:11.373779 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:11.440144 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:11.440167 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:11.440180 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:11.471848 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:11.471882 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:14.001390 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:14.014765 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:14.014835 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:14.045766 1329399 cri.go:89] found id: ""
	I1217 02:01:14.045788 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.045797 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:14.045804 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:14.045863 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:14.076389 1329399 cri.go:89] found id: ""
	I1217 02:01:14.076412 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.076445 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:14.076452 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:14.076523 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:14.104093 1329399 cri.go:89] found id: ""
	I1217 02:01:14.104119 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.104128 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:14.104134 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:14.104190 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:14.129946 1329399 cri.go:89] found id: ""
	I1217 02:01:14.129973 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.129982 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:14.129989 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:14.130049 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:14.155276 1329399 cri.go:89] found id: ""
	I1217 02:01:14.155303 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.155312 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:14.155319 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:14.155377 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:14.183554 1329399 cri.go:89] found id: ""
	I1217 02:01:14.183578 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.183596 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:14.183603 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:14.183666 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:14.214629 1329399 cri.go:89] found id: ""
	I1217 02:01:14.214654 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.214664 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:14.214670 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:14.214728 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:14.243002 1329399 cri.go:89] found id: ""
	I1217 02:01:14.243026 1329399 logs.go:282] 0 containers: []
	W1217 02:01:14.243036 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:14.243045 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:14.243059 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:14.274912 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:14.274953 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:14.302289 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:14.302319 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:14.375347 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:14.375385 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:14.393354 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:14.393387 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:14.460633 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:16.960992 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:16.976936 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:16.977011 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:17.055199 1329399 cri.go:89] found id: ""
	I1217 02:01:17.055221 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.055230 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:17.055237 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:17.055303 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:17.127290 1329399 cri.go:89] found id: ""
	I1217 02:01:17.127313 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.127322 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:17.127328 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:17.127388 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:17.154592 1329399 cri.go:89] found id: ""
	I1217 02:01:17.154618 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.154627 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:17.154634 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:17.154696 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:17.181025 1329399 cri.go:89] found id: ""
	I1217 02:01:17.181052 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.181062 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:17.181075 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:17.181135 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:17.207604 1329399 cri.go:89] found id: ""
	I1217 02:01:17.207628 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.207644 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:17.207652 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:17.207713 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:17.236092 1329399 cri.go:89] found id: ""
	I1217 02:01:17.236119 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.236129 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:17.236136 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:17.236195 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:17.263697 1329399 cri.go:89] found id: ""
	I1217 02:01:17.263724 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.263733 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:17.263740 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:17.263802 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:17.291669 1329399 cri.go:89] found id: ""
	I1217 02:01:17.291692 1329399 logs.go:282] 0 containers: []
	W1217 02:01:17.291705 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:17.291714 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:17.291726 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:17.310325 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:17.310357 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:17.376351 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:17.376370 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:17.376383 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:17.408240 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:17.408278 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:17.438164 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:17.438194 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:20.010242 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:20.033944 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:20.034029 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:20.110785 1329399 cri.go:89] found id: ""
	I1217 02:01:20.110808 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.110817 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:20.110823 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:20.110889 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:20.162807 1329399 cri.go:89] found id: ""
	I1217 02:01:20.162829 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.162839 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:20.162845 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:20.162906 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:20.203334 1329399 cri.go:89] found id: ""
	I1217 02:01:20.203357 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.203366 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:20.203372 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:20.203436 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:20.249028 1329399 cri.go:89] found id: ""
	I1217 02:01:20.249103 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.249115 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:20.249150 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:20.249244 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:20.290676 1329399 cri.go:89] found id: ""
	I1217 02:01:20.290749 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.290762 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:20.290768 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:20.290884 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:20.340296 1329399 cri.go:89] found id: ""
	I1217 02:01:20.340398 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.340487 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:20.340498 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:20.340658 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:20.371560 1329399 cri.go:89] found id: ""
	I1217 02:01:20.371635 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.371663 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:20.371683 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:20.371819 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:20.408209 1329399 cri.go:89] found id: ""
	I1217 02:01:20.408287 1329399 logs.go:282] 0 containers: []
	W1217 02:01:20.408313 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:20.408338 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:20.408383 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:20.443056 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:20.443086 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:20.491496 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:20.491521 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:20.573815 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:20.574192 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:20.593523 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:20.593657 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:20.670564 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:23.171083 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:23.187567 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:23.187654 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:23.220454 1329399 cri.go:89] found id: ""
	I1217 02:01:23.220477 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.220485 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:23.220491 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:23.220551 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:23.258093 1329399 cri.go:89] found id: ""
	I1217 02:01:23.258114 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.258123 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:23.258129 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:23.258187 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:23.302892 1329399 cri.go:89] found id: ""
	I1217 02:01:23.302914 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.302923 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:23.302929 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:23.302987 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:23.347042 1329399 cri.go:89] found id: ""
	I1217 02:01:23.347065 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.347073 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:23.347080 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:23.347146 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:23.379193 1329399 cri.go:89] found id: ""
	I1217 02:01:23.379267 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.379291 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:23.379316 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:23.379406 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:23.410726 1329399 cri.go:89] found id: ""
	I1217 02:01:23.410800 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.410823 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:23.410847 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:23.410939 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:23.445996 1329399 cri.go:89] found id: ""
	I1217 02:01:23.446068 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.446092 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:23.446113 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:23.446200 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:23.487453 1329399 cri.go:89] found id: ""
	I1217 02:01:23.487480 1329399 logs.go:282] 0 containers: []
	W1217 02:01:23.487489 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:23.487519 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:23.487535 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:23.570757 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:23.570824 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:23.591602 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:23.591694 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:23.683378 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:23.683447 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:23.683480 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:23.718788 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:23.718819 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:26.293121 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:26.303687 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:26.303761 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:26.329652 1329399 cri.go:89] found id: ""
	I1217 02:01:26.329679 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.329689 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:26.329696 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:26.329753 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:26.358849 1329399 cri.go:89] found id: ""
	I1217 02:01:26.358876 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.358886 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:26.358892 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:26.358951 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:26.388029 1329399 cri.go:89] found id: ""
	I1217 02:01:26.388056 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.388066 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:26.388072 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:26.388135 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:26.413719 1329399 cri.go:89] found id: ""
	I1217 02:01:26.413746 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.413755 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:26.413762 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:26.413818 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:26.439108 1329399 cri.go:89] found id: ""
	I1217 02:01:26.439134 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.439144 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:26.439151 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:26.439208 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:26.467745 1329399 cri.go:89] found id: ""
	I1217 02:01:26.467769 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.467778 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:26.467785 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:26.467845 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:26.494413 1329399 cri.go:89] found id: ""
	I1217 02:01:26.494436 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.494445 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:26.494451 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:26.494509 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:26.525192 1329399 cri.go:89] found id: ""
	I1217 02:01:26.525219 1329399 logs.go:282] 0 containers: []
	W1217 02:01:26.525229 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:26.525239 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:26.525253 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:26.543110 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:26.543142 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:26.612546 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:26.612631 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:26.612653 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:26.644022 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:26.644057 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:26.683742 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:26.683769 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:29.261316 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:29.272281 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:29.272354 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:29.298886 1329399 cri.go:89] found id: ""
	I1217 02:01:29.298912 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.298922 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:29.298928 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:29.298989 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:29.325014 1329399 cri.go:89] found id: ""
	I1217 02:01:29.325040 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.325050 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:29.325056 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:29.325118 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:29.351506 1329399 cri.go:89] found id: ""
	I1217 02:01:29.351531 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.351540 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:29.351546 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:29.351606 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:29.378974 1329399 cri.go:89] found id: ""
	I1217 02:01:29.378997 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.379006 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:29.379013 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:29.379076 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:29.405446 1329399 cri.go:89] found id: ""
	I1217 02:01:29.405471 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.405481 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:29.405487 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:29.405556 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:29.437506 1329399 cri.go:89] found id: ""
	I1217 02:01:29.437535 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.437545 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:29.437552 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:29.437615 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:29.467804 1329399 cri.go:89] found id: ""
	I1217 02:01:29.467828 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.467837 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:29.467843 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:29.467904 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:29.497451 1329399 cri.go:89] found id: ""
	I1217 02:01:29.497480 1329399 logs.go:282] 0 containers: []
	W1217 02:01:29.497490 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:29.497500 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:29.497513 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:29.528428 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:29.528465 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:29.559007 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:29.559038 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:29.627833 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:29.627870 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:29.645928 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:29.645959 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:29.710743 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:32.211030 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:32.221443 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:32.221515 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:32.247803 1329399 cri.go:89] found id: ""
	I1217 02:01:32.247829 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.247838 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:32.247845 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:32.247913 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:32.278039 1329399 cri.go:89] found id: ""
	I1217 02:01:32.278070 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.278080 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:32.278088 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:32.278149 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:32.303578 1329399 cri.go:89] found id: ""
	I1217 02:01:32.303605 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.303620 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:32.303627 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:32.303683 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:32.333379 1329399 cri.go:89] found id: ""
	I1217 02:01:32.333405 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.333414 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:32.333421 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:32.333482 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:32.358560 1329399 cri.go:89] found id: ""
	I1217 02:01:32.358628 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.358643 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:32.358650 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:32.358714 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:32.383788 1329399 cri.go:89] found id: ""
	I1217 02:01:32.383810 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.383822 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:32.383829 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:32.383888 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:32.409761 1329399 cri.go:89] found id: ""
	I1217 02:01:32.409787 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.409796 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:32.409803 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:32.409860 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:32.434414 1329399 cri.go:89] found id: ""
	I1217 02:01:32.434441 1329399 logs.go:282] 0 containers: []
	W1217 02:01:32.434451 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:32.434460 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:32.434491 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:32.506739 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:32.506780 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:32.527967 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:32.528006 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:32.597506 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:32.597525 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:32.597541 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:32.629353 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:32.629387 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:35.163712 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:35.174199 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:35.174281 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:35.204885 1329399 cri.go:89] found id: ""
	I1217 02:01:35.204912 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.204922 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:35.204929 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:35.204990 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:35.229946 1329399 cri.go:89] found id: ""
	I1217 02:01:35.229976 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.229986 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:35.229992 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:35.230052 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:35.255670 1329399 cri.go:89] found id: ""
	I1217 02:01:35.255697 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.255706 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:35.255713 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:35.255773 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:35.282290 1329399 cri.go:89] found id: ""
	I1217 02:01:35.282316 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.282326 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:35.282333 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:35.282424 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:35.311209 1329399 cri.go:89] found id: ""
	I1217 02:01:35.311233 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.311243 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:35.311279 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:35.311360 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:35.340564 1329399 cri.go:89] found id: ""
	I1217 02:01:35.340598 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.340607 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:35.340614 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:35.340672 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:35.367140 1329399 cri.go:89] found id: ""
	I1217 02:01:35.367164 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.367173 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:35.367180 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:35.367251 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:35.394413 1329399 cri.go:89] found id: ""
	I1217 02:01:35.394438 1329399 logs.go:282] 0 containers: []
	W1217 02:01:35.394447 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:35.394456 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:35.394468 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:35.426289 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:35.426317 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:35.498331 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:35.498372 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:35.517058 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:35.517089 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:35.582632 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:35.582655 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:35.582668 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:38.113942 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:38.124850 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:38.124921 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:38.151760 1329399 cri.go:89] found id: ""
	I1217 02:01:38.151786 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.151796 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:38.151804 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:38.151889 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:38.177213 1329399 cri.go:89] found id: ""
	I1217 02:01:38.177238 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.177248 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:38.177254 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:38.177312 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:38.204530 1329399 cri.go:89] found id: ""
	I1217 02:01:38.204562 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.204571 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:38.204578 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:38.204649 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:38.230864 1329399 cri.go:89] found id: ""
	I1217 02:01:38.230892 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.230901 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:38.230908 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:38.230972 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:38.256851 1329399 cri.go:89] found id: ""
	I1217 02:01:38.256878 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.256887 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:38.256894 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:38.256955 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:38.283762 1329399 cri.go:89] found id: ""
	I1217 02:01:38.283788 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.283798 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:38.283811 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:38.283874 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:38.311049 1329399 cri.go:89] found id: ""
	I1217 02:01:38.311076 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.311085 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:38.311091 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:38.311148 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:38.337404 1329399 cri.go:89] found id: ""
	I1217 02:01:38.337430 1329399 logs.go:282] 0 containers: []
	W1217 02:01:38.337440 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:38.337449 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:38.337461 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:38.404790 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:38.404829 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:38.422876 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:38.422908 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:38.490554 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:38.490617 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:38.490647 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:38.521717 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:38.521753 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:41.053453 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:41.064239 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:41.064313 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:41.093057 1329399 cri.go:89] found id: ""
	I1217 02:01:41.093085 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.093094 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:41.093101 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:41.093181 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:41.119978 1329399 cri.go:89] found id: ""
	I1217 02:01:41.120005 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.120014 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:41.120021 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:41.120081 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:41.145550 1329399 cri.go:89] found id: ""
	I1217 02:01:41.145577 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.145586 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:41.145593 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:41.145649 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:41.170017 1329399 cri.go:89] found id: ""
	I1217 02:01:41.170043 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.170052 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:41.170059 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:41.170118 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:41.195389 1329399 cri.go:89] found id: ""
	I1217 02:01:41.195415 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.195424 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:41.195431 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:41.195488 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:41.225869 1329399 cri.go:89] found id: ""
	I1217 02:01:41.225892 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.225909 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:41.225915 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:41.225987 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:41.252508 1329399 cri.go:89] found id: ""
	I1217 02:01:41.252547 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.252557 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:41.252563 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:41.252652 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:41.278672 1329399 cri.go:89] found id: ""
	I1217 02:01:41.278695 1329399 logs.go:282] 0 containers: []
	W1217 02:01:41.278706 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:41.278715 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:41.278727 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:41.296659 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:41.296694 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:41.364548 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:41.364629 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:41.364658 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:41.395691 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:41.395727 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:41.432338 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:41.432371 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:44.001449 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:44.024638 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:44.024715 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:44.093111 1329399 cri.go:89] found id: ""
	I1217 02:01:44.093139 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.093148 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:44.093155 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:44.093212 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:44.139388 1329399 cri.go:89] found id: ""
	I1217 02:01:44.139411 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.139421 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:44.139428 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:44.139486 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:44.177420 1329399 cri.go:89] found id: ""
	I1217 02:01:44.177444 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.177453 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:44.177460 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:44.177517 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:44.215205 1329399 cri.go:89] found id: ""
	I1217 02:01:44.215227 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.215236 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:44.215243 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:44.215299 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:44.251812 1329399 cri.go:89] found id: ""
	I1217 02:01:44.251841 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.251851 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:44.251858 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:44.251918 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:44.294154 1329399 cri.go:89] found id: ""
	I1217 02:01:44.294178 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.294186 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:44.294193 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:44.294257 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:44.338841 1329399 cri.go:89] found id: ""
	I1217 02:01:44.338864 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.338873 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:44.338879 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:44.338936 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:44.370868 1329399 cri.go:89] found id: ""
	I1217 02:01:44.370890 1329399 logs.go:282] 0 containers: []
	W1217 02:01:44.370899 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:44.370908 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:44.370920 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:44.439540 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:44.439578 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:44.457778 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:44.457810 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:44.526477 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:44.526506 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:44.526525 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:44.557549 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:44.557594 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:47.088855 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:47.100551 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:47.100653 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:47.143507 1329399 cri.go:89] found id: ""
	I1217 02:01:47.143542 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.143551 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:47.143559 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:47.143651 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:47.175895 1329399 cri.go:89] found id: ""
	I1217 02:01:47.175933 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.175942 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:47.175949 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:47.176015 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:47.211608 1329399 cri.go:89] found id: ""
	I1217 02:01:47.211631 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.211641 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:47.211648 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:47.211712 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:47.243501 1329399 cri.go:89] found id: ""
	I1217 02:01:47.243538 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.243548 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:47.243555 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:47.243634 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:47.272264 1329399 cri.go:89] found id: ""
	I1217 02:01:47.272339 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.272363 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:47.272383 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:47.272507 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:47.306307 1329399 cri.go:89] found id: ""
	I1217 02:01:47.306381 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.306406 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:47.306427 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:47.306534 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:47.347955 1329399 cri.go:89] found id: ""
	I1217 02:01:47.347994 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.348011 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:47.348018 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:47.348094 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:47.382775 1329399 cri.go:89] found id: ""
	I1217 02:01:47.382796 1329399 logs.go:282] 0 containers: []
	W1217 02:01:47.382805 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:47.382814 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:47.382826 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:47.419246 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:47.419285 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:47.464082 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:47.464106 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:47.541523 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:47.541599 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:47.560737 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:47.560810 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:47.662232 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:50.163953 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:50.174760 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:50.174832 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:50.201003 1329399 cri.go:89] found id: ""
	I1217 02:01:50.201031 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.201040 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:50.201047 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:50.201107 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:50.231072 1329399 cri.go:89] found id: ""
	I1217 02:01:50.231095 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.231104 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:50.231110 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:50.231167 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:50.261475 1329399 cri.go:89] found id: ""
	I1217 02:01:50.261499 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.261507 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:50.261513 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:50.261573 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:50.287674 1329399 cri.go:89] found id: ""
	I1217 02:01:50.287696 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.287705 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:50.287712 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:50.287774 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:50.315976 1329399 cri.go:89] found id: ""
	I1217 02:01:50.315997 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.316006 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:50.316012 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:50.316074 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:50.340975 1329399 cri.go:89] found id: ""
	I1217 02:01:50.340997 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.341006 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:50.341070 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:50.341145 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:50.385832 1329399 cri.go:89] found id: ""
	I1217 02:01:50.385860 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.385869 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:50.385875 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:50.385949 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:50.423673 1329399 cri.go:89] found id: ""
	I1217 02:01:50.423697 1329399 logs.go:282] 0 containers: []
	W1217 02:01:50.423706 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:50.423715 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:50.423726 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:50.521545 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:50.521587 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:50.544370 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:50.544746 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:50.645163 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:50.645194 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:50.645207 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:50.683226 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:50.683265 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:53.214309 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:53.227096 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:53.227171 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:53.258140 1329399 cri.go:89] found id: ""
	I1217 02:01:53.258170 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.258179 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:53.258186 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:53.258246 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:53.286099 1329399 cri.go:89] found id: ""
	I1217 02:01:53.286126 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.286137 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:53.286144 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:53.286206 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:53.315440 1329399 cri.go:89] found id: ""
	I1217 02:01:53.315464 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.315473 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:53.315480 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:53.315548 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:53.340909 1329399 cri.go:89] found id: ""
	I1217 02:01:53.340934 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.340943 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:53.340949 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:53.341005 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:53.367058 1329399 cri.go:89] found id: ""
	I1217 02:01:53.367082 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.367091 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:53.367098 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:53.367157 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:53.393488 1329399 cri.go:89] found id: ""
	I1217 02:01:53.393513 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.393521 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:53.393527 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:53.393586 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:53.419264 1329399 cri.go:89] found id: ""
	I1217 02:01:53.419288 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.419296 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:53.419302 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:53.419359 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:53.445194 1329399 cri.go:89] found id: ""
	I1217 02:01:53.445224 1329399 logs.go:282] 0 containers: []
	W1217 02:01:53.445233 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:53.445241 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:53.445252 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:53.476481 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:53.476518 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:53.506354 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:53.506384 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:53.575100 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:53.575136 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:53.594205 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:53.594237 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:53.660877 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:56.162560 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:56.173375 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:56.173448 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:56.202077 1329399 cri.go:89] found id: ""
	I1217 02:01:56.202102 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.202111 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:56.202118 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:56.202175 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:56.227523 1329399 cri.go:89] found id: ""
	I1217 02:01:56.227546 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.227556 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:56.227562 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:56.227619 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:56.255001 1329399 cri.go:89] found id: ""
	I1217 02:01:56.255030 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.255040 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:56.255047 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:56.255105 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:56.280117 1329399 cri.go:89] found id: ""
	I1217 02:01:56.280145 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.280154 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:56.280161 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:56.280218 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:56.307679 1329399 cri.go:89] found id: ""
	I1217 02:01:56.307706 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.307716 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:56.307722 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:56.307780 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:56.333429 1329399 cri.go:89] found id: ""
	I1217 02:01:56.333453 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.333462 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:56.333468 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:56.333527 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:56.357997 1329399 cri.go:89] found id: ""
	I1217 02:01:56.358021 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.358030 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:56.358036 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:56.358091 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:56.383918 1329399 cri.go:89] found id: ""
	I1217 02:01:56.383981 1329399 logs.go:282] 0 containers: []
	W1217 02:01:56.384005 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:56.384027 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:56.384054 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:56.411364 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:56.411388 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:56.477692 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:56.477726 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:56.499686 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:56.499718 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:56.589156 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:56.589178 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:56.589190 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:01:59.135470 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:01:59.145567 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:01:59.145640 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:01:59.172535 1329399 cri.go:89] found id: ""
	I1217 02:01:59.172566 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.172576 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:01:59.172582 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:01:59.172661 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:01:59.198699 1329399 cri.go:89] found id: ""
	I1217 02:01:59.198729 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.198739 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:01:59.198745 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:01:59.198806 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:01:59.224368 1329399 cri.go:89] found id: ""
	I1217 02:01:59.224397 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.224407 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:01:59.224438 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:01:59.224498 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:01:59.249408 1329399 cri.go:89] found id: ""
	I1217 02:01:59.249436 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.249446 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:01:59.249453 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:01:59.249512 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:01:59.275852 1329399 cri.go:89] found id: ""
	I1217 02:01:59.275874 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.275883 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:01:59.275889 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:01:59.275949 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:01:59.302091 1329399 cri.go:89] found id: ""
	I1217 02:01:59.302114 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.302122 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:01:59.302129 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:01:59.302236 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:01:59.328891 1329399 cri.go:89] found id: ""
	I1217 02:01:59.328914 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.328923 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:01:59.328930 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:01:59.328989 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:01:59.358097 1329399 cri.go:89] found id: ""
	I1217 02:01:59.358121 1329399 logs.go:282] 0 containers: []
	W1217 02:01:59.358130 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:01:59.358139 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:01:59.358150 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:01:59.388038 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:01:59.388062 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:01:59.455134 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:01:59.455170 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:01:59.473020 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:01:59.473050 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:01:59.540401 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:01:59.540448 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:01:59.540462 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:02.071592 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:02.082234 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:02.082305 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:02.108788 1329399 cri.go:89] found id: ""
	I1217 02:02:02.108811 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.108819 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:02.108825 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:02.108883 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:02.139707 1329399 cri.go:89] found id: ""
	I1217 02:02:02.139734 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.139744 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:02.139750 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:02.139814 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:02.172313 1329399 cri.go:89] found id: ""
	I1217 02:02:02.172337 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.172346 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:02.172352 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:02.172428 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:02.209147 1329399 cri.go:89] found id: ""
	I1217 02:02:02.209170 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.209179 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:02.209186 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:02.209246 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:02.242597 1329399 cri.go:89] found id: ""
	I1217 02:02:02.242622 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.242630 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:02.242637 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:02.242695 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:02.277711 1329399 cri.go:89] found id: ""
	I1217 02:02:02.277748 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.277758 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:02.277766 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:02.277836 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:02.314871 1329399 cri.go:89] found id: ""
	I1217 02:02:02.314908 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.314918 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:02.314925 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:02.314991 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:02.342193 1329399 cri.go:89] found id: ""
	I1217 02:02:02.342220 1329399 logs.go:282] 0 containers: []
	W1217 02:02:02.342229 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:02.342239 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:02.342251 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:02.429253 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:02.429276 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:02.429289 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:02.468464 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:02.468500 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:02.509877 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:02.509954 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:02.587011 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:02.587071 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:05.112629 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:05.124159 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:05.124232 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:05.155480 1329399 cri.go:89] found id: ""
	I1217 02:02:05.155503 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.155511 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:05.155517 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:05.155578 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:05.187829 1329399 cri.go:89] found id: ""
	I1217 02:02:05.187852 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.187860 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:05.187866 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:05.187928 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:05.219053 1329399 cri.go:89] found id: ""
	I1217 02:02:05.219076 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.219085 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:05.219092 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:05.219152 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:05.254746 1329399 cri.go:89] found id: ""
	I1217 02:02:05.254768 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.254777 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:05.254785 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:05.254842 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:05.286937 1329399 cri.go:89] found id: ""
	I1217 02:02:05.286959 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.286969 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:05.286976 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:05.287039 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:05.331146 1329399 cri.go:89] found id: ""
	I1217 02:02:05.331169 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.331178 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:05.331190 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:05.331250 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:05.366031 1329399 cri.go:89] found id: ""
	I1217 02:02:05.366055 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.366071 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:05.366077 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:05.366134 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:05.396677 1329399 cri.go:89] found id: ""
	I1217 02:02:05.396700 1329399 logs.go:282] 0 containers: []
	W1217 02:02:05.396710 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:05.396720 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:05.396732 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:05.431969 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:05.431999 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:05.506270 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:05.506308 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:05.531499 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:05.531532 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:05.617151 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:05.617183 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:05.617195 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:08.150014 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:08.160836 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:08.160908 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:08.189260 1329399 cri.go:89] found id: ""
	I1217 02:02:08.189285 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.189295 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:08.189302 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:08.189361 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:08.214438 1329399 cri.go:89] found id: ""
	I1217 02:02:08.214460 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.214469 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:08.214475 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:08.214533 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:08.241775 1329399 cri.go:89] found id: ""
	I1217 02:02:08.241801 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.241809 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:08.241816 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:08.241875 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:08.272538 1329399 cri.go:89] found id: ""
	I1217 02:02:08.272562 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.272572 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:08.272579 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:08.272648 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:08.304739 1329399 cri.go:89] found id: ""
	I1217 02:02:08.304763 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.304773 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:08.304780 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:08.304843 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:08.331535 1329399 cri.go:89] found id: ""
	I1217 02:02:08.331565 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.331575 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:08.331581 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:08.331639 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:08.356205 1329399 cri.go:89] found id: ""
	I1217 02:02:08.356248 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.356261 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:08.356268 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:08.356329 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:08.381434 1329399 cri.go:89] found id: ""
	I1217 02:02:08.381462 1329399 logs.go:282] 0 containers: []
	W1217 02:02:08.381472 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:08.381481 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:08.381492 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:08.411579 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:08.411615 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:08.440257 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:08.440285 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:08.507417 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:08.507460 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:08.525719 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:08.525749 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:08.594030 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:11.094261 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:11.105126 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:11.105199 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:11.131239 1329399 cri.go:89] found id: ""
	I1217 02:02:11.131266 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.131275 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:11.131283 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:11.131342 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:11.162635 1329399 cri.go:89] found id: ""
	I1217 02:02:11.162663 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.162672 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:11.162679 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:11.162741 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:11.196348 1329399 cri.go:89] found id: ""
	I1217 02:02:11.196440 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.196465 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:11.196497 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:11.196578 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:11.223073 1329399 cri.go:89] found id: ""
	I1217 02:02:11.223100 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.223109 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:11.223116 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:11.223174 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:11.250516 1329399 cri.go:89] found id: ""
	I1217 02:02:11.250540 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.250549 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:11.250555 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:11.250617 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:11.281044 1329399 cri.go:89] found id: ""
	I1217 02:02:11.281067 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.281076 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:11.281083 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:11.281145 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:11.306618 1329399 cri.go:89] found id: ""
	I1217 02:02:11.306641 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.306651 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:11.306657 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:11.306716 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:11.333220 1329399 cri.go:89] found id: ""
	I1217 02:02:11.333260 1329399 logs.go:282] 0 containers: []
	W1217 02:02:11.333270 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:11.333281 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:11.333294 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:11.363775 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:11.363805 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:11.430471 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:11.430509 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:11.448948 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:11.448979 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:11.513941 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:11.513969 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:11.514004 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:14.047370 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:14.059737 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:14.059823 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:14.087527 1329399 cri.go:89] found id: ""
	I1217 02:02:14.087554 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.087563 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:14.087577 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:14.087638 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:14.113083 1329399 cri.go:89] found id: ""
	I1217 02:02:14.113106 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.113114 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:14.113122 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:14.113182 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:14.142868 1329399 cri.go:89] found id: ""
	I1217 02:02:14.142891 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.142900 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:14.142906 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:14.142968 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:14.173274 1329399 cri.go:89] found id: ""
	I1217 02:02:14.173297 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.173306 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:14.173312 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:14.173373 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:14.203394 1329399 cri.go:89] found id: ""
	I1217 02:02:14.203416 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.203425 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:14.203431 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:14.203492 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:14.229313 1329399 cri.go:89] found id: ""
	I1217 02:02:14.229336 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.229344 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:14.229351 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:14.229412 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:14.254038 1329399 cri.go:89] found id: ""
	I1217 02:02:14.254108 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.254133 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:14.254146 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:14.254225 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:14.280851 1329399 cri.go:89] found id: ""
	I1217 02:02:14.280921 1329399 logs.go:282] 0 containers: []
	W1217 02:02:14.280937 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:14.280947 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:14.280960 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:14.349364 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:14.349404 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:14.367683 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:14.367712 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:14.432016 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:14.432039 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:14.432051 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:14.462924 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:14.462966 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:16.994644 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:17.009087 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:17.009191 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:17.046248 1329399 cri.go:89] found id: ""
	I1217 02:02:17.046276 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.046296 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:17.046303 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:17.046371 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:17.074525 1329399 cri.go:89] found id: ""
	I1217 02:02:17.074562 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.074572 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:17.074578 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:17.074648 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:17.100864 1329399 cri.go:89] found id: ""
	I1217 02:02:17.100934 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.100959 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:17.100972 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:17.101051 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:17.128032 1329399 cri.go:89] found id: ""
	I1217 02:02:17.128056 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.128067 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:17.128095 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:17.128187 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:17.155055 1329399 cri.go:89] found id: ""
	I1217 02:02:17.155080 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.155089 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:17.155095 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:17.155209 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:17.183267 1329399 cri.go:89] found id: ""
	I1217 02:02:17.183291 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.183301 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:17.183308 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:17.183394 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:17.209994 1329399 cri.go:89] found id: ""
	I1217 02:02:17.210018 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.210027 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:17.210033 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:17.210141 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:17.236266 1329399 cri.go:89] found id: ""
	I1217 02:02:17.236294 1329399 logs.go:282] 0 containers: []
	W1217 02:02:17.236304 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:17.236314 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:17.236325 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:17.265698 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:17.265728 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:17.334471 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:17.334515 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:17.352723 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:17.352754 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:17.417609 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:17.417632 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:17.417645 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:19.950192 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:19.960392 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:19.960487 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:19.996139 1329399 cri.go:89] found id: ""
	I1217 02:02:19.996169 1329399 logs.go:282] 0 containers: []
	W1217 02:02:19.996178 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:19.996185 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:19.996246 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:20.035428 1329399 cri.go:89] found id: ""
	I1217 02:02:20.035454 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.035463 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:20.035470 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:20.035537 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:20.070507 1329399 cri.go:89] found id: ""
	I1217 02:02:20.070533 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.070542 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:20.070549 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:20.070612 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:20.102856 1329399 cri.go:89] found id: ""
	I1217 02:02:20.102881 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.102891 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:20.102898 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:20.102961 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:20.131284 1329399 cri.go:89] found id: ""
	I1217 02:02:20.131309 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.131317 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:20.131324 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:20.131383 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:20.160767 1329399 cri.go:89] found id: ""
	I1217 02:02:20.160790 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.160799 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:20.160807 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:20.160866 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:20.191464 1329399 cri.go:89] found id: ""
	I1217 02:02:20.191494 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.191503 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:20.191514 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:20.191584 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:20.217077 1329399 cri.go:89] found id: ""
	I1217 02:02:20.217102 1329399 logs.go:282] 0 containers: []
	W1217 02:02:20.217111 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:20.217120 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:20.217132 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:20.287816 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:20.287856 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:20.305788 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:20.305823 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:20.372465 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:20.372488 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:20.372501 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:20.404483 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:20.404520 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
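The block above is one full pass of minikube's log-gathering loop: it probes each expected control-plane and addon container by name with crictl and, finding none, collects kubelet, dmesg, describe-nodes, CRI-O and container-status output instead. A hedged sketch of the same per-component probe done by hand (component names copied from the loop above):

    # One-shot equivalent of the per-component container probe
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done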
	I1217 02:02:22.932568 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:22.942919 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:22.942993 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:22.973089 1329399 cri.go:89] found id: ""
	I1217 02:02:22.973109 1329399 logs.go:282] 0 containers: []
	W1217 02:02:22.973118 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:22.973124 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:22.973187 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:23.009165 1329399 cri.go:89] found id: ""
	I1217 02:02:23.009196 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.009206 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:23.009213 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:23.009283 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:23.038878 1329399 cri.go:89] found id: ""
	I1217 02:02:23.038906 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.038916 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:23.038922 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:23.038980 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:23.076284 1329399 cri.go:89] found id: ""
	I1217 02:02:23.076313 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.076323 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:23.076329 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:23.076394 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:23.106407 1329399 cri.go:89] found id: ""
	I1217 02:02:23.106433 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.106442 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:23.106448 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:23.106507 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:23.132357 1329399 cri.go:89] found id: ""
	I1217 02:02:23.132381 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.132389 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:23.132396 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:23.132481 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:23.158264 1329399 cri.go:89] found id: ""
	I1217 02:02:23.158288 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.158297 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:23.158304 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:23.158360 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:23.185091 1329399 cri.go:89] found id: ""
	I1217 02:02:23.185120 1329399 logs.go:282] 0 containers: []
	W1217 02:02:23.185130 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:23.185139 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:23.185151 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:23.256072 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:23.256119 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:23.274511 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:23.274548 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:23.337809 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:23.337872 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:23.337901 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:23.374283 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:23.374329 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:25.902708 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:25.912851 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:25.912925 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:25.940084 1329399 cri.go:89] found id: ""
	I1217 02:02:25.940107 1329399 logs.go:282] 0 containers: []
	W1217 02:02:25.940115 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:25.940121 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:25.940180 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:25.968807 1329399 cri.go:89] found id: ""
	I1217 02:02:25.968833 1329399 logs.go:282] 0 containers: []
	W1217 02:02:25.968842 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:25.968849 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:25.968907 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:25.993498 1329399 cri.go:89] found id: ""
	I1217 02:02:25.993521 1329399 logs.go:282] 0 containers: []
	W1217 02:02:25.993530 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:25.993538 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:25.993599 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:26.029872 1329399 cri.go:89] found id: ""
	I1217 02:02:26.029903 1329399 logs.go:282] 0 containers: []
	W1217 02:02:26.029911 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:26.029918 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:26.029983 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:26.060738 1329399 cri.go:89] found id: ""
	I1217 02:02:26.060760 1329399 logs.go:282] 0 containers: []
	W1217 02:02:26.060769 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:26.060775 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:26.060845 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:26.089382 1329399 cri.go:89] found id: ""
	I1217 02:02:26.089405 1329399 logs.go:282] 0 containers: []
	W1217 02:02:26.089414 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:26.089421 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:26.089482 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:26.116367 1329399 cri.go:89] found id: ""
	I1217 02:02:26.116390 1329399 logs.go:282] 0 containers: []
	W1217 02:02:26.116399 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:26.116405 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:26.116488 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:26.141998 1329399 cri.go:89] found id: ""
	I1217 02:02:26.142022 1329399 logs.go:282] 0 containers: []
	W1217 02:02:26.142030 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:26.142039 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:26.142052 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:26.159809 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:26.159841 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:26.225098 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:26.225121 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:26.225133 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:26.257132 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:26.257169 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:26.286247 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:26.286278 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:28.860549 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:28.872983 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:28.873057 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:28.909884 1329399 cri.go:89] found id: ""
	I1217 02:02:28.909911 1329399 logs.go:282] 0 containers: []
	W1217 02:02:28.909920 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:28.909926 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:28.909987 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:28.945046 1329399 cri.go:89] found id: ""
	I1217 02:02:28.945074 1329399 logs.go:282] 0 containers: []
	W1217 02:02:28.945083 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:28.945095 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:28.945165 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:28.981971 1329399 cri.go:89] found id: ""
	I1217 02:02:28.981999 1329399 logs.go:282] 0 containers: []
	W1217 02:02:28.982008 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:28.982020 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:28.982101 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:29.024820 1329399 cri.go:89] found id: ""
	I1217 02:02:29.024848 1329399 logs.go:282] 0 containers: []
	W1217 02:02:29.024857 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:29.024863 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:29.024922 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:29.076552 1329399 cri.go:89] found id: ""
	I1217 02:02:29.076580 1329399 logs.go:282] 0 containers: []
	W1217 02:02:29.076590 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:29.076596 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:29.076660 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:29.137010 1329399 cri.go:89] found id: ""
	I1217 02:02:29.137033 1329399 logs.go:282] 0 containers: []
	W1217 02:02:29.137042 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:29.137049 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:29.137112 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:29.179599 1329399 cri.go:89] found id: ""
	I1217 02:02:29.179621 1329399 logs.go:282] 0 containers: []
	W1217 02:02:29.179630 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:29.179637 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:29.179698 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:29.210334 1329399 cri.go:89] found id: ""
	I1217 02:02:29.210357 1329399 logs.go:282] 0 containers: []
	W1217 02:02:29.210371 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:29.210381 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:29.210392 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:29.234680 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:29.234720 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:29.302852 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:29.302874 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:29.302888 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:29.332793 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:29.332827 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:29.365055 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:29.365083 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:31.931748 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:31.942932 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:31.943005 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:31.988745 1329399 cri.go:89] found id: ""
	I1217 02:02:31.988764 1329399 logs.go:282] 0 containers: []
	W1217 02:02:31.988771 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:31.988777 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:31.988823 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:32.022400 1329399 cri.go:89] found id: ""
	I1217 02:02:32.022422 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.022430 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:32.022437 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:32.022497 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:32.059412 1329399 cri.go:89] found id: ""
	I1217 02:02:32.059433 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.059442 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:32.059448 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:32.059505 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:32.091714 1329399 cri.go:89] found id: ""
	I1217 02:02:32.091736 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.091744 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:32.091751 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:32.091809 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:32.124145 1329399 cri.go:89] found id: ""
	I1217 02:02:32.124167 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.124175 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:32.124182 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:32.124244 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:32.158796 1329399 cri.go:89] found id: ""
	I1217 02:02:32.158872 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.158897 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:32.158919 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:32.159003 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:32.191468 1329399 cri.go:89] found id: ""
	I1217 02:02:32.191494 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.191528 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:32.191542 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:32.191622 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:32.229863 1329399 cri.go:89] found id: ""
	I1217 02:02:32.229886 1329399 logs.go:282] 0 containers: []
	W1217 02:02:32.229894 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:32.229902 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:32.229969 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:32.249209 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:32.249244 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:32.416355 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:32.416378 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:32.416392 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:32.451570 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:32.451603 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:32.495107 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:32.495136 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
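The same four log sources are re-collected on every pass of the loop. When reproducing this outside the test harness, a one-shot collection using the same commands the log runs (with --no-pager added so journalctl does not invoke a pager) might look like:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a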
	I1217 02:02:35.076249 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:35.086685 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:35.086755 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:35.113164 1329399 cri.go:89] found id: ""
	I1217 02:02:35.113188 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.113196 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:35.113203 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:35.113262 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:35.143437 1329399 cri.go:89] found id: ""
	I1217 02:02:35.143465 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.143476 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:35.143482 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:35.143544 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:35.169254 1329399 cri.go:89] found id: ""
	I1217 02:02:35.169278 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.169286 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:35.169292 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:35.169353 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:35.197860 1329399 cri.go:89] found id: ""
	I1217 02:02:35.197887 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.197898 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:35.197905 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:35.197964 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:35.228451 1329399 cri.go:89] found id: ""
	I1217 02:02:35.228485 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.228495 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:35.228507 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:35.228576 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:35.278330 1329399 cri.go:89] found id: ""
	I1217 02:02:35.278359 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.278368 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:35.278375 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:35.278433 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:35.367512 1329399 cri.go:89] found id: ""
	I1217 02:02:35.367528 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.367536 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:35.367542 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:35.367588 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:35.404385 1329399 cri.go:89] found id: ""
	I1217 02:02:35.404440 1329399 logs.go:282] 0 containers: []
	W1217 02:02:35.404450 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:35.404460 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:35.404472 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:35.487145 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:35.487182 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:35.511652 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:35.511685 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:35.612543 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:35.612617 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:35.612658 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:35.649105 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:35.649143 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:38.192032 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:38.202456 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:38.202543 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:38.228887 1329399 cri.go:89] found id: ""
	I1217 02:02:38.228909 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.228917 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:38.228928 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:38.228989 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:38.260967 1329399 cri.go:89] found id: ""
	I1217 02:02:38.260991 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.261000 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:38.261005 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:38.261067 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:38.290833 1329399 cri.go:89] found id: ""
	I1217 02:02:38.290855 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.290863 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:38.290869 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:38.290926 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:38.320882 1329399 cri.go:89] found id: ""
	I1217 02:02:38.320907 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.320916 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:38.320923 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:38.320983 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:38.349153 1329399 cri.go:89] found id: ""
	I1217 02:02:38.349185 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.349194 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:38.349200 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:38.349263 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:38.374985 1329399 cri.go:89] found id: ""
	I1217 02:02:38.375013 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.375023 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:38.375029 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:38.375151 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:38.401245 1329399 cri.go:89] found id: ""
	I1217 02:02:38.401272 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.401282 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:38.401288 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:38.401348 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:38.427454 1329399 cri.go:89] found id: ""
	I1217 02:02:38.427481 1329399 logs.go:282] 0 containers: []
	W1217 02:02:38.427491 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:38.427500 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:38.427539 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:38.494530 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:38.494569 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:38.512853 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:38.512935 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:38.581253 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:38.581320 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:38.581349 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:38.611943 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:38.611981 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:41.144266 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:41.155004 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:41.155088 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:41.185376 1329399 cri.go:89] found id: ""
	I1217 02:02:41.185400 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.185408 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:41.185414 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:41.185471 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:41.212738 1329399 cri.go:89] found id: ""
	I1217 02:02:41.212764 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.212773 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:02:41.212779 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:41.212841 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:41.237587 1329399 cri.go:89] found id: ""
	I1217 02:02:41.237615 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.237623 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:02:41.237630 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:41.237711 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:41.287399 1329399 cri.go:89] found id: ""
	I1217 02:02:41.287424 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.287433 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:41.287440 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:41.287498 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:41.325569 1329399 cri.go:89] found id: ""
	I1217 02:02:41.325593 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.325603 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:41.325609 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:41.325669 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:41.354639 1329399 cri.go:89] found id: ""
	I1217 02:02:41.354664 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.354672 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:41.354679 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:41.354736 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:41.380768 1329399 cri.go:89] found id: ""
	I1217 02:02:41.380799 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.380813 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:41.380820 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:02:41.380898 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:02:41.410788 1329399 cri.go:89] found id: ""
	I1217 02:02:41.410812 1329399 logs.go:282] 0 containers: []
	W1217 02:02:41.410820 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:02:41.410830 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:41.410842 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:41.486545 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:41.486583 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:41.505341 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:41.505383 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:41.571751 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:41.571773 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:02:41.571786 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:02:41.603778 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:02:41.603817 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:02:44.136331 1329399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:02:44.146953 1329399 kubeadm.go:602] duration metric: took 4m2.802865381s to restartPrimaryControlPlane
	W1217 02:02:44.147022 1329399 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 02:02:44.147099 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 02:02:44.564749 1329399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:02:44.577549 1329399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 02:02:44.585759 1329399 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:02:44.585826 1329399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:02:44.593473 1329399 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:02:44.593493 1329399 kubeadm.go:158] found existing configuration files:
	
	I1217 02:02:44.593548 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:02:44.601140 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:02:44.601204 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:02:44.608968 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:02:44.617019 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:02:44.617089 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:02:44.624869 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:02:44.633238 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:02:44.633330 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:02:44.641132 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:02:44.650447 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:02:44.650515 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
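The four grep/rm pairs above are a stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise, so the kubeadm init that follows can regenerate it. A compact sketch of the same check (written as a loop for brevity; the log runs the files one by one):

    # Remove any kubeconfig that is missing or does not point at the expected endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done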
	I1217 02:02:44.658309 1329399 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:02:44.697611 1329399 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:44.697892 1329399 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:44.783923 1329399 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:44.784026 1329399 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 02:02:44.784097 1329399 kubeadm.go:319] OS: Linux
	I1217 02:02:44.784173 1329399 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:44.784250 1329399 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:44.784323 1329399 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:44.784399 1329399 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:44.784481 1329399 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:44.784554 1329399 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:44.784620 1329399 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:44.784696 1329399 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:44.784769 1329399 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:44.853884 1329399 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:44.854048 1329399 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:44.854184 1329399 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:44.862708 1329399 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:44.867090 1329399 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:44.867201 1329399 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:44.867274 1329399 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:44.867406 1329399 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:44.867504 1329399 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:44.867606 1329399 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:44.867671 1329399 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:44.867746 1329399 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:44.867816 1329399 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:44.867900 1329399 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:44.867985 1329399 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:44.868028 1329399 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:44.868092 1329399 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:45.070804 1329399 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:45.363509 1329399 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:45.447905 1329399 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:45.712144 1329399 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:45.941569 1329399 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:45.942701 1329399 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:45.954302 1329399 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:45.958462 1329399 out.go:252]   - Booting up control plane ...
	I1217 02:02:45.958576 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:45.958660 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:45.958763 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:45.980502 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:45.980614 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:45.991296 1329399 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:45.991396 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:45.991447 1329399 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:46.161688 1329399 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:46.161849 1329399 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:06:46.159939 1329399 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000331006s
	I1217 02:06:46.159973 1329399 kubeadm.go:319] 
	I1217 02:06:46.160031 1329399 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:06:46.160065 1329399 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:06:46.160169 1329399 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:06:46.160202 1329399 kubeadm.go:319] 
	I1217 02:06:46.160317 1329399 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:06:46.160367 1329399 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:06:46.160403 1329399 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:06:46.160407 1329399 kubeadm.go:319] 
	I1217 02:06:46.165202 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:06:46.165693 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:06:46.165845 1329399 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:06:46.166107 1329399 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 02:06:46.166114 1329399 kubeadm.go:319] 
	W1217 02:06:46.166324 1329399 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
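The init fails because the kubelet never passes its local health check inside the 4m0s wait-control-plane window. The follow-ups suggested in the output above reduce to three checks on the node; a minimal sketch, assuming shell access to the node (for example via minikube ssh):

    # Is the kubelet unit running? (the preflight warning also notes it is not enabled)
    systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # The health endpoint kubeadm polls; a healthy kubelet answers "ok"
    curl -sS http://127.0.0.1:10248/healthz; echo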
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000331006s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 02:06:46.166441 1329399 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 02:06:46.166770 1329399 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:06:46.580532 1329399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:06:46.593838 1329399 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:06:46.593917 1329399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:06:46.601842 1329399 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:06:46.601865 1329399 kubeadm.go:158] found existing configuration files:
	
	I1217 02:06:46.601939 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:06:46.609872 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:06:46.609940 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:06:46.618275 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:06:46.626479 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:06:46.626560 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:06:46.634335 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:06:46.642624 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:06:46.642717 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:06:46.650527 1329399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:06:46.658682 1329399 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:06:46.658794 1329399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:06:46.666530 1329399 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:06:46.705777 1329399 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:06:46.706065 1329399 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:06:46.783093 1329399 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:06:46.783206 1329399 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 02:06:46.783259 1329399 kubeadm.go:319] OS: Linux
	I1217 02:06:46.783327 1329399 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:06:46.783398 1329399 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:06:46.783466 1329399 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:06:46.783527 1329399 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:06:46.783595 1329399 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:06:46.783660 1329399 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:06:46.783729 1329399 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:06:46.783800 1329399 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:06:46.783870 1329399 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:06:46.854572 1329399 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:06:46.854748 1329399 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:06:46.854878 1329399 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:06:46.864842 1329399 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:06:46.867999 1329399 out.go:252]   - Generating certificates and keys ...
	I1217 02:06:46.868156 1329399 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:06:46.868247 1329399 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:06:46.868376 1329399 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:06:46.868512 1329399 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:06:46.868621 1329399 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:06:46.868688 1329399 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:06:46.868758 1329399 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:06:46.868827 1329399 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:06:46.868924 1329399 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:06:46.869002 1329399 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:06:46.869045 1329399 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:06:46.869106 1329399 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:06:47.137867 1329399 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:06:47.422214 1329399 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:06:47.956682 1329399 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:06:48.100547 1329399 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:06:48.477832 1329399 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:06:48.478517 1329399 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:06:48.481120 1329399 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:06:48.484139 1329399 out.go:252]   - Booting up control plane ...
	I1217 02:06:48.484244 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:06:48.484318 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:06:48.493125 1329399 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:06:48.508769 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:06:48.508873 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:06:48.517815 1329399 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:06:48.517916 1329399 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:06:48.517955 1329399 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:06:48.689867 1329399 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:06:48.689984 1329399 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:10:48.687037 1329399 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001008389s
	I1217 02:10:48.687301 1329399 kubeadm.go:319] 
	I1217 02:10:48.687377 1329399 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:10:48.687411 1329399 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:10:48.687518 1329399 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:10:48.687524 1329399 kubeadm.go:319] 
	I1217 02:10:48.687636 1329399 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:10:48.687668 1329399 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:10:48.687697 1329399 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:10:48.687701 1329399 kubeadm.go:319] 
	I1217 02:10:48.693230 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:10:48.693631 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:10:48.693735 1329399 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:10:48.693999 1329399 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:10:48.694006 1329399 kubeadm.go:319] 
	I1217 02:10:48.694071 1329399 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:10:48.694126 1329399 kubeadm.go:403] duration metric: took 12m7.416827156s to StartCluster
	I1217 02:10:48.694159 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.694223 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.727844 1329399 cri.go:89] found id: ""
	I1217 02:10:48.727866 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.727875 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.727882 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.727940 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.774886 1329399 cri.go:89] found id: ""
	I1217 02:10:48.774909 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.774917 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.774923 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.774986 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.824692 1329399 cri.go:89] found id: ""
	I1217 02:10:48.824715 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.824723 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.824729 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.824787 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.869036 1329399 cri.go:89] found id: ""
	I1217 02:10:48.869061 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.869070 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.869078 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.869138 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.899590 1329399 cri.go:89] found id: ""
	I1217 02:10:48.899618 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.899626 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.899637 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.899697 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.941572 1329399 cri.go:89] found id: ""
	I1217 02:10:48.941599 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.941609 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.941615 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.941739 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.978775 1329399 cri.go:89] found id: ""
	I1217 02:10:48.978796 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.978805 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.978812 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:10:48.978936 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:10:49.008921 1329399 cri.go:89] found id: ""
	I1217 02:10:49.008946 1329399 logs.go:282] 0 containers: []
	W1217 02:10:49.008954 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:10:49.008964 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:49.009051 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:49.029773 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:49.029818 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:49.122514 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:49.122537 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:10:49.122550 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:10:49.162384 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:10:49.162420 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:49.210379 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:49.210410 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:49.300335 1329399 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:10:49.300890 1329399 out.go:285] * 
	* 
	W1217 02:10:49.300990 1329399 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:10:49.301188 1329399 out.go:285] * 
	* 
	W1217 02:10:49.303377 1329399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:10:49.310826 1329399 out.go:203] 
	W1217 02:10:49.314886 1329399 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:10:49.315167 1329399 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:10:49.315197 1329399 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:10:49.318835 1329399 out.go:203] 

                                                
                                                
** /stderr **
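The SystemVerification warning repeated throughout the captured output above names the kubelet configuration option 'FailCgroupV1' as the switch for keeping kubelet v1.35+ running on a cgroup v1 host. As a minimal illustrative sketch only (not something the test ran), assuming the option serializes as failCgroupV1 in KubeletConfiguration YAML and that /tmp/kubelet-cgroupv1.yaml is just a scratch path:

	# Sketch: write a KubeletConfiguration fragment that opts back into cgroup v1,
	# per the warning above. Field name taken from the warning text; YAML casing assumed.
	cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF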
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
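The minikube output above also suggests retrying with the systemd cgroup driver. A manual reproduction of the failing invocation with that suggestion applied could look like the following; this is a sketch assembled from the args recorded in the failure line above plus the suggested --extra-config flag, not a command the test actually executed:

	# Sketch: re-run the recorded start command with the cgroup-driver suggestion applied.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-813956 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd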
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-813956 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-813956 version --output=json: exit status 1 (246.199679ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-17 02:10:50.622400576 +0000 UTC m=+6133.691249832
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-813956
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-813956:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba",
	        "Created": "2025-12-17T01:57:47.5385414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1329530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:58:27.070671431Z",
	            "FinishedAt": "2025-12-17T01:58:25.922258321Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba/hostname",
	        "HostsPath": "/var/lib/docker/containers/1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba/hosts",
	        "LogPath": "/var/lib/docker/containers/1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba/1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba-json.log",
	        "Name": "/kubernetes-upgrade-813956",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-813956:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-813956",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1dd6d1d45ff4d3276de7b90e27c5d967de1226f314a0034a5fc80e9a4119baba",
	                "LowerDir": "/var/lib/docker/overlay2/0ea53c45d3546d7cb852ed5f12235df9dd8278375496e5d1904309b1287e1547-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ea53c45d3546d7cb852ed5f12235df9dd8278375496e5d1904309b1287e1547/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ea53c45d3546d7cb852ed5f12235df9dd8278375496e5d1904309b1287e1547/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ea53c45d3546d7cb852ed5f12235df9dd8278375496e5d1904309b1287e1547/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-813956",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-813956/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-813956",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-813956",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-813956",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ea62ccbb04986c7b8c09f8ebf8be3736198b928e4959f1a4931a9ef4433c539",
	            "SandboxKey": "/var/run/docker/netns/1ea62ccbb049",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-813956": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:05:10:59:0c:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e9ec39abb2aa68e1c175327320e8de788d3c11d022f954c99c93aca0e3e98f7",
	                    "EndpointID": "eb32124d98af56e935b26a63d41808a4bcf2855a949951b4aadb9bd1e247d13b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-813956",
	                        "1dd6d1d45ff4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-813956 -n kubernetes-upgrade-813956
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-813956 -n kubernetes-upgrade-813956: exit status 2 (433.500385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-813956 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p missing-upgrade-935345 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ stop    │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p NoKubernetes-262920 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ delete  │ -p missing-upgrade-935345                                                                                                                       │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ stop    │ -p kubernetes-upgrade-813956                                                                                                                    │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │                     │
	│ stop    │ stopped-upgrade-925123 stop                                                                                                                     │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 02:03 UTC │
	│ delete  │ -p stopped-upgrade-925123                                                                                                                       │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-842996    │ jenkins │ v1.35.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:04 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:04 UTC │ 17 Dec 25 02:08 UTC │
	│ delete  │ -p running-upgrade-842996                                                                                                                       │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:08 UTC │
	│ start   │ -p pause-666844 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:09 UTC │
	│ start   │ -p pause-666844 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:09 UTC │ 17 Dec 25 02:10 UTC │
	│ pause   │ -p pause-666844 --alsologtostderr -v=5                                                                                                          │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:10 UTC │                     │
	│ delete  │ -p pause-666844                                                                                                                                 │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:10 UTC │ 17 Dec 25 02:10 UTC │
	│ start   │ -p force-systemd-flag-485146 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-485146 │ jenkins │ v1.37.0 │ 17 Dec 25 02:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:10:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:10:33.573610 1364480 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:10:33.573753 1364480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:10:33.573765 1364480 out.go:374] Setting ErrFile to fd 2...
	I1217 02:10:33.573770 1364480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:10:33.574051 1364480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 02:10:33.574500 1364480 out.go:368] Setting JSON to false
	I1217 02:10:33.575489 1364480 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28384,"bootTime":1765909050,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 02:10:33.575562 1364480 start.go:143] virtualization:  
	I1217 02:10:33.579404 1364480 out.go:179] * [force-systemd-flag-485146] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:10:33.584337 1364480 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:10:33.584355 1364480 notify.go:221] Checking for updates...
	I1217 02:10:33.588843 1364480 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:10:33.592348 1364480 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 02:10:33.595789 1364480 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 02:10:33.599146 1364480 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:10:33.602256 1364480 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:10:33.605976 1364480 config.go:182] Loaded profile config "kubernetes-upgrade-813956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 02:10:33.606142 1364480 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:10:33.630863 1364480 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:10:33.630992 1364480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:10:33.699346 1364480 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:10:33.689680655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:10:33.699459 1364480 docker.go:319] overlay module found
	I1217 02:10:33.702962 1364480 out.go:179] * Using the docker driver based on user configuration
	I1217 02:10:33.706008 1364480 start.go:309] selected driver: docker
	I1217 02:10:33.706028 1364480 start.go:927] validating driver "docker" against <nil>
	I1217 02:10:33.706043 1364480 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:10:33.706823 1364480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:10:33.763374 1364480 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 02:10:33.753982077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:10:33.763541 1364480 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 02:10:33.763768 1364480 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 02:10:33.766852 1364480 out.go:179] * Using Docker driver with root privileges
	I1217 02:10:33.769754 1364480 cni.go:84] Creating CNI manager for ""
	I1217 02:10:33.769845 1364480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:10:33.769861 1364480 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 02:10:33.769958 1364480 start.go:353] cluster config:
	{Name:force-systemd-flag-485146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-485146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:10:33.775229 1364480 out.go:179] * Starting "force-systemd-flag-485146" primary control-plane node in "force-systemd-flag-485146" cluster
	I1217 02:10:33.778201 1364480 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 02:10:33.781199 1364480 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:10:33.784131 1364480 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:10:33.784179 1364480 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 02:10:33.784192 1364480 cache.go:65] Caching tarball of preloaded images
	I1217 02:10:33.784216 1364480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:10:33.784292 1364480 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 02:10:33.784303 1364480 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 02:10:33.784406 1364480 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/config.json ...
	I1217 02:10:33.784450 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/config.json: {Name:mke9a644d254cfdb542865a8d02bc2a0834a88d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:33.803860 1364480 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:10:33.803886 1364480 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:10:33.803907 1364480 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:10:33.803941 1364480 start.go:360] acquireMachinesLock for force-systemd-flag-485146: {Name:mke381d1ca3c346c6bde59471c7ae4c3940feb37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:10:33.804050 1364480 start.go:364] duration metric: took 87.768µs to acquireMachinesLock for "force-systemd-flag-485146"
	I1217 02:10:33.804082 1364480 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-485146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-485146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 02:10:33.804156 1364480 start.go:125] createHost starting for "" (driver="docker")
	I1217 02:10:33.809462 1364480 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 02:10:33.809710 1364480 start.go:159] libmachine.API.Create for "force-systemd-flag-485146" (driver="docker")
	I1217 02:10:33.809749 1364480 client.go:173] LocalClient.Create starting
	I1217 02:10:33.809815 1364480 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem
	I1217 02:10:33.809855 1364480 main.go:143] libmachine: Decoding PEM data...
	I1217 02:10:33.809882 1364480 main.go:143] libmachine: Parsing certificate...
	I1217 02:10:33.809940 1364480 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem
	I1217 02:10:33.809965 1364480 main.go:143] libmachine: Decoding PEM data...
	I1217 02:10:33.809980 1364480 main.go:143] libmachine: Parsing certificate...
	I1217 02:10:33.810337 1364480 cli_runner.go:164] Run: docker network inspect force-systemd-flag-485146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 02:10:33.826920 1364480 cli_runner.go:211] docker network inspect force-systemd-flag-485146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 02:10:33.827031 1364480 network_create.go:284] running [docker network inspect force-systemd-flag-485146] to gather additional debugging logs...
	I1217 02:10:33.827052 1364480 cli_runner.go:164] Run: docker network inspect force-systemd-flag-485146
	W1217 02:10:33.843474 1364480 cli_runner.go:211] docker network inspect force-systemd-flag-485146 returned with exit code 1
	I1217 02:10:33.843509 1364480 network_create.go:287] error running [docker network inspect force-systemd-flag-485146]: docker network inspect force-systemd-flag-485146: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-485146 not found
	I1217 02:10:33.843522 1364480 network_create.go:289] output of [docker network inspect force-systemd-flag-485146]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-485146 not found
	
	** /stderr **
	I1217 02:10:33.843633 1364480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:10:33.860462 1364480 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e224ccab4890 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:8b:ae:d4:d3:20} reservation:<nil>}
	I1217 02:10:33.860847 1364480 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f9d0d1857f8f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:f6:28:c1:ac:c5:59} reservation:<nil>}
	I1217 02:10:33.861106 1364480 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-74da85846a49 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:25:4b:f3:c2:61} reservation:<nil>}
	I1217 02:10:33.861369 1364480 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8e9ec39abb2a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:08:58:4d:1d:c4} reservation:<nil>}
	I1217 02:10:33.861815 1364480 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a201e0}
	I1217 02:10:33.861837 1364480 network_create.go:124] attempt to create docker network force-systemd-flag-485146 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 02:10:33.861899 1364480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-485146 force-systemd-flag-485146
	I1217 02:10:33.917775 1364480 network_create.go:108] docker network force-systemd-flag-485146 192.168.85.0/24 created
	I1217 02:10:33.917812 1364480 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-485146" container
	I1217 02:10:33.917888 1364480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 02:10:33.935550 1364480 cli_runner.go:164] Run: docker volume create force-systemd-flag-485146 --label name.minikube.sigs.k8s.io=force-systemd-flag-485146 --label created_by.minikube.sigs.k8s.io=true
	I1217 02:10:33.954265 1364480 oci.go:103] Successfully created a docker volume force-systemd-flag-485146
	I1217 02:10:33.954354 1364480 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-485146-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-485146 --entrypoint /usr/bin/test -v force-systemd-flag-485146:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 02:10:34.496676 1364480 oci.go:107] Successfully prepared a docker volume force-systemd-flag-485146
	I1217 02:10:34.496749 1364480 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:10:34.496762 1364480 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 02:10:34.496852 1364480 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-485146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 02:10:38.502033 1364480 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-485146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.005114903s)
	I1217 02:10:38.502065 1364480 kic.go:203] duration metric: took 4.005299342s to extract preloaded images to volume ...
	W1217 02:10:38.502199 1364480 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1217 02:10:38.502314 1364480 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 02:10:38.561338 1364480 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-485146 --name force-systemd-flag-485146 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-485146 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-485146 --network force-systemd-flag-485146 --ip 192.168.85.2 --volume force-systemd-flag-485146:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 02:10:38.895544 1364480 cli_runner.go:164] Run: docker container inspect force-systemd-flag-485146 --format={{.State.Running}}
	I1217 02:10:38.915707 1364480 cli_runner.go:164] Run: docker container inspect force-systemd-flag-485146 --format={{.State.Status}}
	I1217 02:10:38.940871 1364480 cli_runner.go:164] Run: docker exec force-systemd-flag-485146 stat /var/lib/dpkg/alternatives/iptables
	I1217 02:10:38.991520 1364480 oci.go:144] the created container "force-systemd-flag-485146" has a running status.
	I1217 02:10:38.991548 1364480 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa...
	I1217 02:10:39.552238 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 02:10:39.552305 1364480 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 02:10:39.571963 1364480 cli_runner.go:164] Run: docker container inspect force-systemd-flag-485146 --format={{.State.Status}}
	I1217 02:10:39.589890 1364480 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 02:10:39.589913 1364480 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-485146 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 02:10:39.636369 1364480 cli_runner.go:164] Run: docker container inspect force-systemd-flag-485146 --format={{.State.Status}}
	I1217 02:10:39.654587 1364480 machine.go:94] provisionDockerMachine start ...
	I1217 02:10:39.654700 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:39.672261 1364480 main.go:143] libmachine: Using SSH client type: native
	I1217 02:10:39.672674 1364480 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34153 <nil> <nil>}
	I1217 02:10:39.672696 1364480 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:10:39.673377 1364480 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:10:42.808926 1364480 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-485146
	
	I1217 02:10:42.808957 1364480 ubuntu.go:182] provisioning hostname "force-systemd-flag-485146"
	I1217 02:10:42.809045 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:42.829433 1364480 main.go:143] libmachine: Using SSH client type: native
	I1217 02:10:42.829763 1364480 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34153 <nil> <nil>}
	I1217 02:10:42.829782 1364480 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-485146 && echo "force-systemd-flag-485146" | sudo tee /etc/hostname
	I1217 02:10:42.974700 1364480 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-485146
	
	I1217 02:10:42.974860 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:42.993464 1364480 main.go:143] libmachine: Using SSH client type: native
	I1217 02:10:42.993811 1364480 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34153 <nil> <nil>}
	I1217 02:10:42.993835 1364480 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-485146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-485146/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-485146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:10:43.129139 1364480 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:10:43.129169 1364480 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 02:10:43.129229 1364480 ubuntu.go:190] setting up certificates
	I1217 02:10:43.129240 1364480 provision.go:84] configureAuth start
	I1217 02:10:43.129358 1364480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-485146
	I1217 02:10:43.147153 1364480 provision.go:143] copyHostCerts
	I1217 02:10:43.147201 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 02:10:43.147235 1364480 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 02:10:43.147248 1364480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 02:10:43.147327 1364480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 02:10:43.147414 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 02:10:43.147437 1364480 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 02:10:43.147442 1364480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 02:10:43.147471 1364480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 02:10:43.147516 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 02:10:43.147541 1364480 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 02:10:43.147546 1364480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 02:10:43.147576 1364480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 02:10:43.147626 1364480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-485146 san=[127.0.0.1 192.168.85.2 force-systemd-flag-485146 localhost minikube]
	I1217 02:10:43.678185 1364480 provision.go:177] copyRemoteCerts
	I1217 02:10:43.678292 1364480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:10:43.678349 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:43.696892 1364480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34153 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa Username:docker}
	I1217 02:10:43.792825 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1217 02:10:43.792895 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:10:43.810571 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1217 02:10:43.810631 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 02:10:43.828547 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1217 02:10:43.828606 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:10:43.847322 1364480 provision.go:87] duration metric: took 718.049474ms to configureAuth
	I1217 02:10:43.847395 1364480 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:10:43.847624 1364480 config.go:182] Loaded profile config "force-systemd-flag-485146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:10:43.847736 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:43.865118 1364480 main.go:143] libmachine: Using SSH client type: native
	I1217 02:10:43.865448 1364480 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34153 <nil> <nil>}
	I1217 02:10:43.865468 1364480 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 02:10:44.157383 1364480 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 02:10:44.157409 1364480 machine.go:97] duration metric: took 4.50279862s to provisionDockerMachine
	I1217 02:10:44.157420 1364480 client.go:176] duration metric: took 10.347660195s to LocalClient.Create
	I1217 02:10:44.157438 1364480 start.go:167] duration metric: took 10.347729035s to libmachine.API.Create "force-systemd-flag-485146"
	I1217 02:10:44.157446 1364480 start.go:293] postStartSetup for "force-systemd-flag-485146" (driver="docker")
	I1217 02:10:44.157456 1364480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:10:44.157521 1364480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:10:44.157578 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:44.174587 1364480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34153 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa Username:docker}
	I1217 02:10:44.272897 1364480 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:10:44.276367 1364480 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:10:44.276404 1364480 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:10:44.276437 1364480 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 02:10:44.276515 1364480 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 02:10:44.276648 1364480 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 02:10:44.276661 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /etc/ssl/certs/11365972.pem
	I1217 02:10:44.276769 1364480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:10:44.284392 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:44.303019 1364480 start.go:296] duration metric: took 145.558489ms for postStartSetup
	I1217 02:10:44.303441 1364480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-485146
	I1217 02:10:44.320370 1364480 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/config.json ...
	I1217 02:10:44.320775 1364480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:10:44.320828 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:44.337850 1364480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34153 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa Username:docker}
	I1217 02:10:44.430398 1364480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:10:44.435338 1364480 start.go:128] duration metric: took 10.631165752s to createHost
	I1217 02:10:44.435367 1364480 start.go:83] releasing machines lock for "force-systemd-flag-485146", held for 10.63130202s
	I1217 02:10:44.435446 1364480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-485146
	I1217 02:10:44.452803 1364480 ssh_runner.go:195] Run: cat /version.json
	I1217 02:10:44.452840 1364480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:10:44.452860 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:44.452903 1364480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-485146
	I1217 02:10:44.476670 1364480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34153 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa Username:docker}
	I1217 02:10:44.487591 1364480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34153 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/force-systemd-flag-485146/id_rsa Username:docker}
	I1217 02:10:44.576260 1364480 ssh_runner.go:195] Run: systemctl --version
	I1217 02:10:44.671714 1364480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 02:10:44.710289 1364480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:10:44.714850 1364480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:10:44.714952 1364480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:10:44.744328 1364480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1217 02:10:44.744358 1364480 start.go:496] detecting cgroup driver to use...
	I1217 02:10:44.744372 1364480 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1217 02:10:44.744491 1364480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:10:44.765580 1364480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:10:44.781469 1364480 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:10:44.781578 1364480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:10:44.801213 1364480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:10:44.822221 1364480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:10:44.939552 1364480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:10:45.089749 1364480 docker.go:234] disabling docker service ...
	I1217 02:10:45.089850 1364480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:10:45.125127 1364480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:10:45.145564 1364480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:10:45.310689 1364480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:10:45.438097 1364480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:10:45.452158 1364480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:10:45.467939 1364480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 02:10:45.468060 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.478950 1364480 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 02:10:45.479046 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.489592 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.499769 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.511899 1364480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:10:45.522717 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.537046 1364480 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.564750 1364480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:45.578184 1364480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:10:45.585907 1364480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:10:45.593588 1364480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:45.704852 1364480 ssh_runner.go:195] Run: sudo systemctl restart crio
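[Editor's note, not part of the captured log] The sed commands above are how minikube points CRI-O at the registry.k8s.io/pause:3.10.1 pause image and switches it to the systemd cgroup manager before restarting the service. A minimal manual check of the resulting drop-in, assuming the same file path used in the log, would be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the log: pause_image = "registry.k8s.io/pause:3.10.1"
    #                        cgroup_manager = "systemd"
    #                        conmon_cgroup = "pod"
    sudo systemctl daemon-reload && sudo systemctl restart crio && systemctl is-active crio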
	I1217 02:10:45.884880 1364480 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 02:10:45.885003 1364480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 02:10:45.889116 1364480 start.go:564] Will wait 60s for crictl version
	I1217 02:10:45.889226 1364480 ssh_runner.go:195] Run: which crictl
	I1217 02:10:45.893064 1364480 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:10:45.917351 1364480 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 02:10:45.917495 1364480 ssh_runner.go:195] Run: crio --version
	I1217 02:10:45.947954 1364480 ssh_runner.go:195] Run: crio --version
	I1217 02:10:45.983742 1364480 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 02:10:45.986663 1364480 cli_runner.go:164] Run: docker network inspect force-systemd-flag-485146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:10:46.003976 1364480 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:10:46.008904 1364480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:10:46.020799 1364480 kubeadm.go:884] updating cluster {Name:force-systemd-flag-485146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-485146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:10:46.020917 1364480 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:10:46.021000 1364480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:46.057367 1364480 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:46.057394 1364480 crio.go:433] Images already preloaded, skipping extraction
	I1217 02:10:46.057451 1364480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:46.082271 1364480 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:46.082296 1364480 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:10:46.082305 1364480 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 02:10:46.082407 1364480 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-485146 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-485146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
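[Editor's note, not part of the captured log] The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders for this profile; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On a systemd host the merged unit can be inspected with standard commands (illustrative, not taken from the log):

    systemctl cat kubelet                  # base kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet    # effective ExecStart once the drop-in is applied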
	I1217 02:10:46.082494 1364480 ssh_runner.go:195] Run: crio config
	I1217 02:10:46.135532 1364480 cni.go:84] Creating CNI manager for ""
	I1217 02:10:46.135556 1364480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:10:46.135595 1364480 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:10:46.135626 1364480 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-485146 NodeName:force-systemd-flag-485146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:10:46.135819 1364480 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-485146"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:10:46.135919 1364480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 02:10:46.144055 1364480 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:10:46.144159 1364480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:10:46.152046 1364480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1217 02:10:46.165547 1364480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 02:10:46.179340 1364480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
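[Editor's note, not part of the captured log] The kubeadm YAML rendered above is what was just written to /var/tmp/minikube/kubeadm.yaml.new and is later fed to kubeadm init --config. As a hedged aside, a kubeadm binary recent enough to ship the "config validate" subcommand (the v1.34.2 binary used here should qualify) can sanity-check such a file before init is attempted:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml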
	I1217 02:10:46.193008 1364480 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:10:46.196669 1364480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:10:46.206699 1364480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:46.330479 1364480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:10:46.346515 1364480 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146 for IP: 192.168.85.2
	I1217 02:10:46.346578 1364480 certs.go:195] generating shared ca certs ...
	I1217 02:10:46.346610 1364480 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:46.346789 1364480 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 02:10:46.346880 1364480 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 02:10:46.346916 1364480 certs.go:257] generating profile certs ...
	I1217 02:10:46.346994 1364480 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.key
	I1217 02:10:46.347031 1364480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.crt with IP's: []
	I1217 02:10:46.540337 1364480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.crt ...
	I1217 02:10:46.540372 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.crt: {Name:mk6b565432c0e4d013f744dcb6c358ddf5c73d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:46.540598 1364480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.key ...
	I1217 02:10:46.540617 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/client.key: {Name:mk66bb6fcdbf2564a863692828cc5496eab3623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:46.540715 1364480 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key.256d4e3f
	I1217 02:10:46.540736 1364480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt.256d4e3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 02:10:46.651351 1364480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt.256d4e3f ...
	I1217 02:10:46.651382 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt.256d4e3f: {Name:mkc0963232b93a405c53bb9eeb1f6c48622060a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:46.651567 1364480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key.256d4e3f ...
	I1217 02:10:46.651583 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key.256d4e3f: {Name:mk2d0fff3727bf71057bbc2366682c34f1a69566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:46.651679 1364480 certs.go:382] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt.256d4e3f -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt
	I1217 02:10:46.651772 1364480 certs.go:386] copying /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key.256d4e3f -> /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key
	I1217 02:10:46.651838 1364480 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.key
	I1217 02:10:46.651852 1364480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.crt with IP's: []
	I1217 02:10:47.024748 1364480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.crt ...
	I1217 02:10:47.024783 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.crt: {Name:mk5fa52af76f8d066ca09d198fd42f6eb0fc48d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:47.024967 1364480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.key ...
	I1217 02:10:47.024987 1364480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.key: {Name:mk01ffb5873b07778e62e88cb0c4b094467bee53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:47.025073 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 02:10:47.025093 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1217 02:10:47.025105 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 02:10:47.025135 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 02:10:47.025152 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 02:10:47.025165 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 02:10:47.025185 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 02:10:47.025200 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 02:10:47.025257 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 02:10:47.025301 1364480 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 02:10:47.025314 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:10:47.025343 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:10:47.025376 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:10:47.025404 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 02:10:47.025450 1364480 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:47.025484 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> /usr/share/ca-certificates/11365972.pem
	I1217 02:10:47.025501 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:47.025514 1364480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem -> /usr/share/ca-certificates/1136597.pem
	I1217 02:10:47.026108 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:10:47.048485 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:10:47.071857 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:10:47.091077 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:10:47.110653 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 02:10:47.130266 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:10:47.148256 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:10:47.166600 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/force-systemd-flag-485146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:10:47.185230 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 02:10:47.204243 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:10:47.222548 1364480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 02:10:47.241682 1364480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:10:47.258014 1364480 ssh_runner.go:195] Run: openssl version
	I1217 02:10:47.265890 1364480 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 02:10:47.274404 1364480 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 02:10:47.283347 1364480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 02:10:47.287819 1364480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 02:10:47.287936 1364480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 02:10:47.331504 1364480 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:10:47.339307 1364480 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11365972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 02:10:47.347028 1364480 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:47.354711 1364480 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:10:47.363090 1364480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:47.367310 1364480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:47.367378 1364480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:47.408469 1364480 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:10:47.416006 1364480 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 02:10:47.423679 1364480 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 02:10:47.431564 1364480 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 02:10:47.439196 1364480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 02:10:47.443082 1364480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 02:10:47.443155 1364480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 02:10:47.484566 1364480 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:10:47.492096 1364480 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1136597.pem /etc/ssl/certs/51391683.0
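[Editor's note, not part of the captured log] The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a ".0" suffix, which is how TLS clients look it up. Reproducing the link name for minikubeCA.pem by hand (illustrative; b5213941 is the hash this run produced):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"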
	I1217 02:10:47.499607 1364480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:10:47.503141 1364480 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 02:10:47.503195 1364480 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-485146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-485146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:10:47.503270 1364480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 02:10:47.503329 1364480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:10:47.532016 1364480 cri.go:89] found id: ""
	I1217 02:10:47.532100 1364480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:10:47.540267 1364480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 02:10:47.548322 1364480 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:10:47.548388 1364480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:10:47.557093 1364480 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:10:47.557114 1364480 kubeadm.go:158] found existing configuration files:
	
	I1217 02:10:47.557195 1364480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:10:47.565441 1364480 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:10:47.565592 1364480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:10:47.573342 1364480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:10:47.581648 1364480 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:10:47.581715 1364480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:10:47.589467 1364480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:10:47.597508 1364480 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:10:47.597601 1364480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:10:47.605553 1364480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:10:47.613904 1364480 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:10:47.614015 1364480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:10:47.621552 1364480 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:10:47.664099 1364480 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 02:10:47.664669 1364480 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:10:47.687900 1364480 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:10:47.688004 1364480 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1217 02:10:47.688045 1364480 kubeadm.go:319] OS: Linux
	I1217 02:10:47.688104 1364480 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:10:47.688171 1364480 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:10:47.688235 1364480 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:10:47.688299 1364480 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:10:47.688363 1364480 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:10:47.688460 1364480 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:10:47.688512 1364480 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:10:47.688575 1364480 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:10:47.688646 1364480 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:10:47.757731 1364480 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:10:47.757861 1364480 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:10:47.757964 1364480 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:10:47.767474 1364480 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:10:47.774559 1364480 out.go:252]   - Generating certificates and keys ...
	I1217 02:10:47.774671 1364480 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:10:47.774752 1364480 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:10:47.910055 1364480 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:10:48.687037 1329399 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001008389s
	I1217 02:10:48.687301 1329399 kubeadm.go:319] 
	I1217 02:10:48.687377 1329399 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:10:48.687411 1329399 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:10:48.687518 1329399 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:10:48.687524 1329399 kubeadm.go:319] 
	I1217 02:10:48.687636 1329399 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:10:48.687668 1329399 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:10:48.687697 1329399 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:10:48.687701 1329399 kubeadm.go:319] 
	I1217 02:10:48.693230 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1217 02:10:48.693631 1329399 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:10:48.693735 1329399 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:10:48.693999 1329399 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:10:48.694006 1329399 kubeadm.go:319] 
	I1217 02:10:48.694071 1329399 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:10:48.694126 1329399 kubeadm.go:403] duration metric: took 12m7.416827156s to StartCluster
	I1217 02:10:48.694159 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:10:48.694223 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:10:48.727844 1329399 cri.go:89] found id: ""
	I1217 02:10:48.727866 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.727875 1329399 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:48.727882 1329399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:10:48.727940 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:10:48.774886 1329399 cri.go:89] found id: ""
	I1217 02:10:48.774909 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.774917 1329399 logs.go:284] No container was found matching "etcd"
	I1217 02:10:48.774923 1329399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:10:48.774986 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:10:48.824692 1329399 cri.go:89] found id: ""
	I1217 02:10:48.824715 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.824723 1329399 logs.go:284] No container was found matching "coredns"
	I1217 02:10:48.824729 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:10:48.824787 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:10:48.869036 1329399 cri.go:89] found id: ""
	I1217 02:10:48.869061 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.869070 1329399 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:48.869078 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:10:48.869138 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:10:48.899590 1329399 cri.go:89] found id: ""
	I1217 02:10:48.899618 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.899626 1329399 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:48.899637 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:10:48.899697 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:10:48.941572 1329399 cri.go:89] found id: ""
	I1217 02:10:48.941599 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.941609 1329399 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:48.941615 1329399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:10:48.941739 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:10:48.978775 1329399 cri.go:89] found id: ""
	I1217 02:10:48.978796 1329399 logs.go:282] 0 containers: []
	W1217 02:10:48.978805 1329399 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:48.978812 1329399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 02:10:48.978936 1329399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 02:10:49.008921 1329399 cri.go:89] found id: ""
	I1217 02:10:49.008946 1329399 logs.go:282] 0 containers: []
	W1217 02:10:49.008954 1329399 logs.go:284] No container was found matching "storage-provisioner"
	I1217 02:10:49.008964 1329399 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:49.009051 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:49.029773 1329399 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:49.029818 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:49.122514 1329399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:49.122537 1329399 logs.go:123] Gathering logs for CRI-O ...
	I1217 02:10:49.122550 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 02:10:49.162384 1329399 logs.go:123] Gathering logs for container status ...
	I1217 02:10:49.162420 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:49.210379 1329399 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:49.210410 1329399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:49.300335 1329399 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:10:49.300890 1329399 out.go:285] * 
	W1217 02:10:49.300990 1329399 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:10:49.301188 1329399 out.go:285] * 
	W1217 02:10:49.303377 1329399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:10:49.310826 1329399 out.go:203] 
	W1217 02:10:49.314886 1329399 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001008389s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:10:49.315167 1329399 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:10:49.315197 1329399 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:10:49.318835 1329399 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757567819Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757609968Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.7576607Z" level=info msg="Create NRI interface"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.75777136Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757780467Z" level=info msg="runtime interface created"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.75779188Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757798321Z" level=info msg="runtime interface starting up..."
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757804557Z" level=info msg="starting plugins..."
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757816873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 01:58:33 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T01:58:33.757871862Z" level=info msg="No systemd watchdog enabled"
	Dec 17 01:58:33 kubernetes-upgrade-813956 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.858332728Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=700ddc6c-283e-4b47-bbf4-f1db9e6c86ff name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.859260732Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5053c0c5-73c3-47e9-a872-97291dfbc972 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.859798706Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b2f134e8-e700-4fc5-82f1-70666816b299 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.860288427Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8914c4c-8655-4cf1-91cf-f47fc55b85a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.860839028Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e034afed-b978-406c-8e14-02e42639b7b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.861288651Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9d72ed8d-be5c-4301-ab6e-6b67586d9092 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:02:44 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:02:44.86176421Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=755e77d1-dd3c-4f83-be81-eb8f5be6e8fb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.858492151Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c567e701-b2fb-4b62-a3bf-8128d4f788bc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.85935157Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=395c1fba-ceba-4657-a003-1fcbfd99abd6 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.859844548Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9b5407f8-f12f-4535-a39f-20b12e225082 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.860384524Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=0b175e55-6eed-4565-970e-a551406cb959 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.86087554Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=e7594db2-427a-4483-8153-1f22b1ed4da1 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.86134579Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=8ee220a8-6eb6-443b-957c-45d27f7d2d0d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 02:06:46 kubernetes-upgrade-813956 crio[617]: time="2025-12-17T02:06:46.861794855Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1af9dde8-f5fd-483b-8bb3-60ac75ce277e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	[Dec17 01:36] overlayfs: idmapped layers are currently not supported
	[Dec17 01:38] overlayfs: idmapped layers are currently not supported
	[Dec17 01:43] overlayfs: idmapped layers are currently not supported
	[ +37.335374] overlayfs: idmapped layers are currently not supported
	[Dec17 01:45] overlayfs: idmapped layers are currently not supported
	[Dec17 01:46] overlayfs: idmapped layers are currently not supported
	[Dec17 01:47] overlayfs: idmapped layers are currently not supported
	[Dec17 01:48] overlayfs: idmapped layers are currently not supported
	[Dec17 01:49] overlayfs: idmapped layers are currently not supported
	[  +7.899083] overlayfs: idmapped layers are currently not supported
	[Dec17 01:50] overlayfs: idmapped layers are currently not supported
	[ +25.041678] overlayfs: idmapped layers are currently not supported
	[Dec17 01:51] overlayfs: idmapped layers are currently not supported
	[ +26.339183] overlayfs: idmapped layers are currently not supported
	[Dec17 01:53] overlayfs: idmapped layers are currently not supported
	[Dec17 01:54] overlayfs: idmapped layers are currently not supported
	[Dec17 01:56] overlayfs: idmapped layers are currently not supported
	[Dec17 01:58] overlayfs: idmapped layers are currently not supported
	[Dec17 02:09] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:10:51 up  7:53,  0 user,  load average: 2.00, 1.55, 1.69
	Linux kubernetes-upgrade-813956 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:10:48 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:10:49 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 17 02:10:49 kubernetes-upgrade-813956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:49 kubernetes-upgrade-813956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:49 kubernetes-upgrade-813956 kubelet[12264]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:49 kubernetes-upgrade-813956 kubelet[12264]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:49 kubernetes-upgrade-813956 kubelet[12264]: E1217 02:10:49.445293   12264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:10:49 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:10:49 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:10:50 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 17 02:10:50 kubernetes-upgrade-813956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:50 kubernetes-upgrade-813956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:50 kubernetes-upgrade-813956 kubelet[12269]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:50 kubernetes-upgrade-813956 kubelet[12269]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:50 kubernetes-upgrade-813956 kubelet[12269]: E1217 02:10:50.410031   12269 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:10:50 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:10:50 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:10:51 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 17 02:10:51 kubernetes-upgrade-813956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:51 kubernetes-upgrade-813956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:10:51 kubernetes-upgrade-813956 kubelet[12290]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:51 kubernetes-upgrade-813956 kubelet[12290]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 17 02:10:51 kubernetes-upgrade-813956 kubelet[12290]: E1217 02:10:51.421853   12290 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:10:51 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:10:51 kubernetes-upgrade-813956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-813956 -n kubernetes-upgrade-813956
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-813956 -n kubernetes-upgrade-813956: exit status 2 (479.125278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-813956" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-813956" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-813956
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-813956: (2.363773933s)
--- FAIL: TestKubernetesUpgrade (794.38s)
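Note: the kubelet journal above shows the actual failure mode behind this upgrade timeout: kubelet v1.35.0-beta.0 exits during config validation because the node is on a cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm wait-control-plane timeout and 960+ restart loop follow from that. A minimal sketch for confirming which cgroup hierarchy the node exposes; this is plain stat usage and not part of the test harness:

	# Filesystem type mounted at /sys/fs/cgroup:
	#   cgroup2fs -> unified (v2) hierarchy
	#   tmpfs     -> legacy (v1) hierarchy, which this kubelet rejects
	stat -fc %T /sys/fs/cgroup/

On a v1 host, the SystemVerification warning in the kubeadm output points at the kubelet 'FailCgroupV1' option and the linked KEP; the minikube suggestion (--extra-config=kubelet.cgroup-driver=systemd) is generic advice and may not by itself address the cgroup v1 validation failure.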

                                                
                                    
TestPause/serial/Pause (7.09s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-666844 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-666844 --alsologtostderr -v=5: exit status 80 (2.391863251s)

                                                
                                                
-- stdout --
	* Pausing node pause-666844 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 02:10:24.020787 1363030 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:10:24.021567 1363030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:10:24.021588 1363030 out.go:374] Setting ErrFile to fd 2...
	I1217 02:10:24.021595 1363030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:10:24.021910 1363030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 02:10:24.022210 1363030 out.go:368] Setting JSON to false
	I1217 02:10:24.022240 1363030 mustload.go:66] Loading cluster: pause-666844
	I1217 02:10:24.023273 1363030 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:10:24.024118 1363030 cli_runner.go:164] Run: docker container inspect pause-666844 --format={{.State.Status}}
	I1217 02:10:24.047188 1363030 host.go:66] Checking if "pause-666844" exists ...
	I1217 02:10:24.047548 1363030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:10:24.117533 1363030 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 02:10:24.106762841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:10:24.118168 1363030 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765846775-22141/minikube-v1.37.0-1765846775-22141-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765846775-22141-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-666844 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 02:10:24.121298 1363030 out.go:179] * Pausing node pause-666844 ... 
	I1217 02:10:24.124854 1363030 host.go:66] Checking if "pause-666844" exists ...
	I1217 02:10:24.125221 1363030 ssh_runner.go:195] Run: systemctl --version
	I1217 02:10:24.125272 1363030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:24.144143 1363030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:24.244882 1363030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:24.260309 1363030 pause.go:52] kubelet running: true
	I1217 02:10:24.260376 1363030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 02:10:24.483660 1363030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 02:10:24.483757 1363030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 02:10:24.577781 1363030 cri.go:89] found id: "a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197"
	I1217 02:10:24.577808 1363030 cri.go:89] found id: "50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014"
	I1217 02:10:24.577814 1363030 cri.go:89] found id: "8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20"
	I1217 02:10:24.577818 1363030 cri.go:89] found id: "7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22"
	I1217 02:10:24.577821 1363030 cri.go:89] found id: "87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca"
	I1217 02:10:24.577825 1363030 cri.go:89] found id: "aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e"
	I1217 02:10:24.577828 1363030 cri.go:89] found id: "6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18"
	I1217 02:10:24.577831 1363030 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:24.577834 1363030 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:24.577841 1363030 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:24.577844 1363030 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:24.577847 1363030 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:24.577850 1363030 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:24.577853 1363030 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:24.577856 1363030 cri.go:89] found id: ""
	I1217 02:10:24.577907 1363030 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 02:10:24.589382 1363030 retry.go:31] will retry after 258.191725ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:24Z" level=error msg="open /run/runc: no such file or directory"
	I1217 02:10:24.847773 1363030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:24.861585 1363030 pause.go:52] kubelet running: false
	I1217 02:10:24.861667 1363030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 02:10:25.023120 1363030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 02:10:25.023274 1363030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 02:10:25.105751 1363030 cri.go:89] found id: "a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197"
	I1217 02:10:25.105780 1363030 cri.go:89] found id: "50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014"
	I1217 02:10:25.105785 1363030 cri.go:89] found id: "8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20"
	I1217 02:10:25.105791 1363030 cri.go:89] found id: "7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22"
	I1217 02:10:25.105794 1363030 cri.go:89] found id: "87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca"
	I1217 02:10:25.105798 1363030 cri.go:89] found id: "aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e"
	I1217 02:10:25.105822 1363030 cri.go:89] found id: "6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18"
	I1217 02:10:25.105826 1363030 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:25.105850 1363030 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:25.105866 1363030 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:25.105869 1363030 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:25.105873 1363030 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:25.105876 1363030 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:25.105880 1363030 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:25.105883 1363030 cri.go:89] found id: ""
	I1217 02:10:25.105951 1363030 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 02:10:25.118476 1363030 retry.go:31] will retry after 285.495862ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:25Z" level=error msg="open /run/runc: no such file or directory"
	I1217 02:10:25.405125 1363030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:25.418321 1363030 pause.go:52] kubelet running: false
	I1217 02:10:25.418401 1363030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 02:10:25.558427 1363030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 02:10:25.558552 1363030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 02:10:25.631797 1363030 cri.go:89] found id: "a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197"
	I1217 02:10:25.631817 1363030 cri.go:89] found id: "50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014"
	I1217 02:10:25.631822 1363030 cri.go:89] found id: "8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20"
	I1217 02:10:25.631825 1363030 cri.go:89] found id: "7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22"
	I1217 02:10:25.631829 1363030 cri.go:89] found id: "87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca"
	I1217 02:10:25.631832 1363030 cri.go:89] found id: "aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e"
	I1217 02:10:25.631835 1363030 cri.go:89] found id: "6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18"
	I1217 02:10:25.631838 1363030 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:25.631841 1363030 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:25.631846 1363030 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:25.631850 1363030 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:25.631852 1363030 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:25.631855 1363030 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:25.631861 1363030 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:25.631864 1363030 cri.go:89] found id: ""
	I1217 02:10:25.631914 1363030 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 02:10:25.643663 1363030 retry.go:31] will retry after 460.484496ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:25Z" level=error msg="open /run/runc: no such file or directory"
	I1217 02:10:26.104345 1363030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:26.117658 1363030 pause.go:52] kubelet running: false
	I1217 02:10:26.117738 1363030 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 02:10:26.258057 1363030 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 02:10:26.258182 1363030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 02:10:26.327351 1363030 cri.go:89] found id: "a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197"
	I1217 02:10:26.327385 1363030 cri.go:89] found id: "50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014"
	I1217 02:10:26.327391 1363030 cri.go:89] found id: "8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20"
	I1217 02:10:26.327395 1363030 cri.go:89] found id: "7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22"
	I1217 02:10:26.327398 1363030 cri.go:89] found id: "87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca"
	I1217 02:10:26.327402 1363030 cri.go:89] found id: "aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e"
	I1217 02:10:26.327405 1363030 cri.go:89] found id: "6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18"
	I1217 02:10:26.327409 1363030 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:26.327412 1363030 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:26.327418 1363030 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:26.327422 1363030 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:26.327425 1363030 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:26.327428 1363030 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:26.327432 1363030 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:26.327435 1363030 cri.go:89] found id: ""
	I1217 02:10:26.327498 1363030 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 02:10:26.341759 1363030 out.go:203] 
	W1217 02:10:26.344605 1363030 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 02:10:26.344632 1363030 out.go:285] * 
	* 
	W1217 02:10:26.354010 1363030 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:10:26.356976 1363030 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-666844 --alsologtostderr -v=5" : exit status 80
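For context on the exit status 80 above: every retry fails at `sudo runc list -f json` with "open /run/runc: no such file or directory", i.e. the runc state directory simply does not exist on this CRI-O node. A minimal sketch for checking which OCI runtime (and state directory) the node is actually using; the /run/crun path is an assumption based on crun's default state location, not something shown in this log:

	# Ask CRI-O (via crictl) for its runtime configuration, then look for a state dir.
	sudo crictl info | grep -i runtime
	ls -d /run/runc /run/crun 2>/dev/null

If the containers are managed by crun rather than runc, `runc list` has nothing to enumerate even while the pods are running, which matches the non-empty crictl container list seen in the stderr above.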
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-666844
helpers_test.go:244: (dbg) docker inspect pause-666844:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2",
	        "Created": "2025-12-17T02:08:41.282747949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1359134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:08:41.374256021Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/hosts",
	        "LogPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2-json.log",
	        "Name": "/pause-666844",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-666844:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-666844",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2",
	                "LowerDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-666844",
	                "Source": "/var/lib/docker/volumes/pause-666844/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-666844",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-666844",
	                "name.minikube.sigs.k8s.io": "pause-666844",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b2d5d257a5348f90ea5ee54bc3afd837bb0ab3d408bde342d89f56b5c3e9a3f",
	            "SandboxKey": "/var/run/docker/netns/6b2d5d257a53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-666844": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a8:91:03:07:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ba59bc309849ecfb509f3782d575a22b8cbb4862095a8294215cd2919b4761d",
	                    "EndpointID": "e707523eda5c8a01483abc584e542025936b427925e7cc9c6477a511abe2d8a3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-666844",
	                        "60fafd755088"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
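The docker inspect output above records the host port mappings for pause-666844 (22/tcp -> 34148, 8443/tcp -> 34151, and so on). A minimal sketch of resolving the mapped apiserver port with the same Go-template style of inspect call the pause log used for port 22; it assumes the pause-666844 container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-666844

Per the NetworkSettings block above, this would print 34151.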
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-666844 -n pause-666844
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-666844 -n pause-666844: exit status 2 (336.058652ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-666844 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-666844 logs -n 25: (1.430873521s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-262920 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p missing-upgrade-935345 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-935345    │ jenkins │ v1.35.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p missing-upgrade-935345 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ stop    │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p NoKubernetes-262920 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ delete  │ -p missing-upgrade-935345                                                                                                                       │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ stop    │ -p kubernetes-upgrade-813956                                                                                                                    │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │                     │
	│ stop    │ stopped-upgrade-925123 stop                                                                                                                     │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 02:03 UTC │
	│ delete  │ -p stopped-upgrade-925123                                                                                                                       │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-842996    │ jenkins │ v1.35.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:04 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:04 UTC │ 17 Dec 25 02:08 UTC │
	│ delete  │ -p running-upgrade-842996                                                                                                                       │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:08 UTC │
	│ start   │ -p pause-666844 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:09 UTC │
	│ start   │ -p pause-666844 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:09 UTC │ 17 Dec 25 02:10 UTC │
	│ pause   │ -p pause-666844 --alsologtostderr -v=5                                                                                                          │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:09:57
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:09:57.935029 1361730 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:09:57.935255 1361730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:09:57.935284 1361730 out.go:374] Setting ErrFile to fd 2...
	I1217 02:09:57.935303 1361730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:09:57.935604 1361730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 02:09:57.936033 1361730 out.go:368] Setting JSON to false
	I1217 02:09:57.937119 1361730 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28348,"bootTime":1765909050,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 02:09:57.937223 1361730 start.go:143] virtualization:  
	I1217 02:09:57.940570 1361730 out.go:179] * [pause-666844] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:09:57.944688 1361730 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:09:57.944764 1361730 notify.go:221] Checking for updates...
	I1217 02:09:57.948330 1361730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:09:57.951395 1361730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 02:09:57.954390 1361730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 02:09:57.957319 1361730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:09:57.960254 1361730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:09:57.963788 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:09:57.964492 1361730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:09:57.990410 1361730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:09:57.990548 1361730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:09:58.064660 1361730 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 02:09:58.054206714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:09:58.064779 1361730 docker.go:319] overlay module found
	I1217 02:09:58.068117 1361730 out.go:179] * Using the docker driver based on existing profile
	I1217 02:09:58.071062 1361730 start.go:309] selected driver: docker
	I1217 02:09:58.071152 1361730 start.go:927] validating driver "docker" against &{Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:09:58.071324 1361730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:09:58.071555 1361730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:09:58.142934 1361730 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 02:09:58.133711586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:09:58.143388 1361730 cni.go:84] Creating CNI manager for ""
	I1217 02:09:58.143449 1361730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:09:58.143500 1361730 start.go:353] cluster config:
	{Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:09:58.146737 1361730 out.go:179] * Starting "pause-666844" primary control-plane node in "pause-666844" cluster
	I1217 02:09:58.149622 1361730 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 02:09:58.152552 1361730 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:09:58.155451 1361730 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:09:58.155485 1361730 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:09:58.155532 1361730 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 02:09:58.155542 1361730 cache.go:65] Caching tarball of preloaded images
	I1217 02:09:58.155626 1361730 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 02:09:58.155635 1361730 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 02:09:58.155906 1361730 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/config.json ...
	I1217 02:09:58.175683 1361730 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:09:58.175706 1361730 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:09:58.175726 1361730 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:09:58.175764 1361730 start.go:360] acquireMachinesLock for pause-666844: {Name:mk669b34aeea697cf796906dfe79ca962658ddfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:09:58.175821 1361730 start.go:364] duration metric: took 37.726µs to acquireMachinesLock for "pause-666844"
	I1217 02:09:58.175848 1361730 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:09:58.175859 1361730 fix.go:54] fixHost starting: 
	I1217 02:09:58.176134 1361730 cli_runner.go:164] Run: docker container inspect pause-666844 --format={{.State.Status}}
	I1217 02:09:58.193502 1361730 fix.go:112] recreateIfNeeded on pause-666844: state=Running err=<nil>
	W1217 02:09:58.193539 1361730 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:09:58.196745 1361730 out.go:252] * Updating the running docker "pause-666844" container ...
	I1217 02:09:58.196786 1361730 machine.go:94] provisionDockerMachine start ...
	I1217 02:09:58.196867 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.214323 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.214646 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.214661 1361730 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:09:58.352240 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-666844
	
	I1217 02:09:58.352266 1361730 ubuntu.go:182] provisioning hostname "pause-666844"
	I1217 02:09:58.352328 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.369953 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.370299 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.370315 1361730 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-666844 && echo "pause-666844" | sudo tee /etc/hostname
	I1217 02:09:58.511521 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-666844
	
	I1217 02:09:58.511644 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.531543 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.531858 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.531880 1361730 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-666844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-666844/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-666844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:09:58.665426 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
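The /etc/hosts snippet above only rewrites (or appends) the 127.0.1.1 entry for the new hostname; assuming no prior custom entry, the expected resulting line is simply:

    127.0.1.1 pause-666844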
	I1217 02:09:58.665452 1361730 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 02:09:58.665470 1361730 ubuntu.go:190] setting up certificates
	I1217 02:09:58.665480 1361730 provision.go:84] configureAuth start
	I1217 02:09:58.665543 1361730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-666844
	I1217 02:09:58.684007 1361730 provision.go:143] copyHostCerts
	I1217 02:09:58.684079 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 02:09:58.684094 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 02:09:58.684169 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 02:09:58.684277 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 02:09:58.684283 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 02:09:58.684312 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 02:09:58.684371 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 02:09:58.684376 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 02:09:58.684403 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 02:09:58.684573 1361730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.pause-666844 san=[127.0.0.1 192.168.85.2 localhost minikube pause-666844]
	I1217 02:09:59.146710 1361730 provision.go:177] copyRemoteCerts
	I1217 02:09:59.146791 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:09:59.146832 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:59.166906 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:09:59.269322 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:09:59.287890 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 02:09:59.306568 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:09:59.325658 1361730 provision.go:87] duration metric: took 660.162493ms to configureAuth
	I1217 02:09:59.325686 1361730 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:09:59.325916 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:09:59.326026 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:59.344881 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:59.345214 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:59.345235 1361730 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 02:10:04.737874 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 02:10:04.737917 1361730 machine.go:97] duration metric: took 6.541120092s to provisionDockerMachine
	I1217 02:10:04.737929 1361730 start.go:293] postStartSetup for "pause-666844" (driver="docker")
	I1217 02:10:04.737957 1361730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:10:04.738037 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:10:04.738094 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:04.760261 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:04.857137 1361730 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:10:04.861090 1361730 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:10:04.861120 1361730 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:10:04.861132 1361730 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 02:10:04.861196 1361730 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 02:10:04.861315 1361730 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 02:10:04.861439 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:10:04.869828 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:04.889259 1361730 start.go:296] duration metric: took 151.297442ms for postStartSetup
	I1217 02:10:04.889341 1361730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:10:04.889388 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:04.907559 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.003311 1361730 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:10:05.011551 1361730 fix.go:56] duration metric: took 6.835681918s for fixHost
	I1217 02:10:05.011589 1361730 start.go:83] releasing machines lock for "pause-666844", held for 6.835751307s
	I1217 02:10:05.011676 1361730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-666844
	I1217 02:10:05.043380 1361730 ssh_runner.go:195] Run: cat /version.json
	I1217 02:10:05.043435 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:05.043698 1361730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:10:05.043744 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:05.065358 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.078245 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.279654 1361730 ssh_runner.go:195] Run: systemctl --version
	I1217 02:10:05.286588 1361730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 02:10:05.336335 1361730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:10:05.340952 1361730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:10:05.341030 1361730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:10:05.350244 1361730 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:10:05.350269 1361730 start.go:496] detecting cgroup driver to use...
	I1217 02:10:05.350301 1361730 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:10:05.350355 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:10:05.367802 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:10:05.381924 1361730 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:10:05.381998 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:10:05.398409 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:10:05.412614 1361730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:10:05.561952 1361730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:10:05.697813 1361730 docker.go:234] disabling docker service ...
	I1217 02:10:05.697895 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:10:05.714211 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:10:05.728048 1361730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:10:05.897831 1361730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:10:06.038839 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:10:06.054079 1361730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:10:06.070237 1361730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 02:10:06.070402 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.081868 1361730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 02:10:06.081997 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.092388 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.102495 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.113108 1361730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:10:06.122344 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.132243 1361730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.141341 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.150770 1361730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:10:06.160049 1361730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:10:06.168786 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:06.302108 1361730 ssh_runner.go:195] Run: sudo systemctl restart crio
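For reference, the sed edits above amount to roughly the following state in /etc/crio/crio.conf.d/02-crio.conf (a sketch of the affected keys only; the TOML section headers are assumed and do not appear in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The sysctl and /proc/sys/net/ipv4/ip_forward steps just before the daemon-reload are host kernel settings, not part of this file.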
	I1217 02:10:06.527519 1361730 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 02:10:06.527669 1361730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 02:10:06.532330 1361730 start.go:564] Will wait 60s for crictl version
	I1217 02:10:06.532494 1361730 ssh_runner.go:195] Run: which crictl
	I1217 02:10:06.536730 1361730 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:10:06.572068 1361730 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 02:10:06.572250 1361730 ssh_runner.go:195] Run: crio --version
	I1217 02:10:06.601522 1361730 ssh_runner.go:195] Run: crio --version
	I1217 02:10:06.636880 1361730 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 02:10:06.639936 1361730 cli_runner.go:164] Run: docker network inspect pause-666844 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:10:06.656119 1361730 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:10:06.660206 1361730 kubeadm.go:884] updating cluster {Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:10:06.660358 1361730 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:10:06.660411 1361730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:06.693119 1361730 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:06.693146 1361730 crio.go:433] Images already preloaded, skipping extraction
	I1217 02:10:06.693205 1361730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:06.720638 1361730 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:06.720662 1361730 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:10:06.720670 1361730 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 02:10:06.720777 1361730 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-666844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
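As the scp lines below show, this unit content lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service; the drop-in only takes effect via the daemon-reload/start pair that follows, i.e. roughly:

    sudo systemctl daemon-reload   # re-read unit files after writing the drop-in
    sudo systemctl start kubelet   # run kubelet with the ExecStart flags shown above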
	I1217 02:10:06.720856 1361730 ssh_runner.go:195] Run: crio config
	I1217 02:10:06.787049 1361730 cni.go:84] Creating CNI manager for ""
	I1217 02:10:06.787122 1361730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:10:06.787158 1361730 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:10:06.787214 1361730 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-666844 NodeName:pause-666844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:10:06.787382 1361730 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-666844"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
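
The config above is only staged, not applied: it is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and, on this restart path, merely compared with the existing file before minikube decides whether reconfiguration is needed, roughly:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # no diff output here, hence "The running cluster does not require reconfiguration" later in the log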
	
	I1217 02:10:06.787501 1361730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 02:10:06.795844 1361730 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:10:06.795964 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:10:06.803797 1361730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 02:10:06.817075 1361730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 02:10:06.831844 1361730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 02:10:06.845499 1361730 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:10:06.849260 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:06.987965 1361730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:10:07.001897 1361730 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844 for IP: 192.168.85.2
	I1217 02:10:07.001984 1361730 certs.go:195] generating shared ca certs ...
	I1217 02:10:07.002017 1361730 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.002230 1361730 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 02:10:07.002308 1361730 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 02:10:07.002346 1361730 certs.go:257] generating profile certs ...
	I1217 02:10:07.002486 1361730 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key
	I1217 02:10:07.002622 1361730 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.key.a9797934
	I1217 02:10:07.002699 1361730 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.key
	I1217 02:10:07.002852 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 02:10:07.002916 1361730 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 02:10:07.002941 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:10:07.003002 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:10:07.003057 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:10:07.003125 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 02:10:07.003264 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:07.004037 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:10:07.025194 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:10:07.044282 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:10:07.063133 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:10:07.082596 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 02:10:07.102325 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:10:07.123369 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:10:07.143113 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:10:07.162055 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 02:10:07.180849 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:10:07.199379 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 02:10:07.218010 1361730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:10:07.231839 1361730 ssh_runner.go:195] Run: openssl version
	I1217 02:10:07.238442 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.246630 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:10:07.260208 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.266620 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.266693 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.309502 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:10:07.317884 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.326344 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 02:10:07.335596 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.339978 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.340056 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.393657 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:10:07.402741 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.410733 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 02:10:07.418810 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.423165 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.423296 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.466143 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:10:07.474073 1361730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:10:07.477958 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:10:07.519488 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:10:07.560825 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:10:07.603304 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:10:07.644963 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:10:07.686630 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
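Each openssl run above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now; a minimal standalone equivalent for one of the certificates checked above:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"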
	I1217 02:10:07.727673 1361730 kubeadm.go:401] StartCluster: {Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:10:07.727799 1361730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 02:10:07.727872 1361730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:10:07.760648 1361730 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:07.760669 1361730 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:07.760673 1361730 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:07.760677 1361730 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:07.760680 1361730 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:07.760683 1361730 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:07.760687 1361730 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:07.760690 1361730 cri.go:89] found id: ""
	I1217 02:10:07.760740 1361730 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 02:10:07.773143 1361730 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:07Z" level=error msg="open /run/runc: no such file or directory"
	I1217 02:10:07.773226 1361730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:10:07.781544 1361730 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:10:07.781565 1361730 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:10:07.781620 1361730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:10:07.789118 1361730 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:10:07.790162 1361730 kubeconfig.go:125] found "pause-666844" server: "https://192.168.85.2:8443"
	I1217 02:10:07.790989 1361730 kapi.go:59] client config for pause-666844: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 02:10:07.791500 1361730 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 02:10:07.791519 1361730 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 02:10:07.791525 1361730 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 02:10:07.791533 1361730 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 02:10:07.791544 1361730 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 02:10:07.791836 1361730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:10:07.801544 1361730 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:10:07.801586 1361730 kubeadm.go:602] duration metric: took 20.013811ms to restartPrimaryControlPlane
	I1217 02:10:07.801596 1361730 kubeadm.go:403] duration metric: took 73.933511ms to StartCluster
	I1217 02:10:07.801616 1361730 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.801693 1361730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 02:10:07.802621 1361730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.802867 1361730 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 02:10:07.803183 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:10:07.803244 1361730 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:10:07.807506 1361730 out.go:179] * Verifying Kubernetes components...
	I1217 02:10:07.807546 1361730 out.go:179] * Enabled addons: 
	I1217 02:10:07.811325 1361730 addons.go:530] duration metric: took 8.072508ms for enable addons: enabled=[]
	I1217 02:10:07.811408 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:08.132169 1361730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:10:08.176517 1361730 node_ready.go:35] waiting up to 6m0s for node "pause-666844" to be "Ready" ...
	I1217 02:10:12.432442 1361730 node_ready.go:49] node "pause-666844" is "Ready"
	I1217 02:10:12.432475 1361730 node_ready.go:38] duration metric: took 4.255926663s for node "pause-666844" to be "Ready" ...
	I1217 02:10:12.432489 1361730 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:10:12.432559 1361730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.449023 1361730 api_server.go:72] duration metric: took 4.646110339s to wait for apiserver process to appear ...
	I1217 02:10:12.449060 1361730 api_server.go:88] waiting for apiserver healthz status ...
	I1217 02:10:12.449080 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:12.469901 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 02:10:12.469933 1361730 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 02:10:12.949619 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:12.958759 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 02:10:12.958808 1361730 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 02:10:13.449212 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:13.461397 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 02:10:13.463739 1361730 api_server.go:141] control plane version: v1.34.2
	I1217 02:10:13.463776 1361730 api_server.go:131] duration metric: took 1.014710424s to wait for apiserver health ...
	I1217 02:10:13.463786 1361730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 02:10:13.470811 1361730 system_pods.go:59] 7 kube-system pods found
	I1217 02:10:13.470850 1361730 system_pods.go:61] "coredns-66bc5c9577-gqldk" [6e2209e5-3adb-4599-9018-3a91a74eca37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:10:13.470862 1361730 system_pods.go:61] "etcd-pause-666844" [36db6b8d-2628-48ab-9216-34b2724d6d3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:10:13.470867 1361730 system_pods.go:61] "kindnet-vpl6h" [adc49a53-a6af-4834-b7ac-000e9c04c4eb] Running
	I1217 02:10:13.470882 1361730 system_pods.go:61] "kube-apiserver-pause-666844" [19918a3f-12a2-4d78-a98f-f89a92913c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:10:13.470895 1361730 system_pods.go:61] "kube-controller-manager-pause-666844" [55dd406c-002d-47f9-9181-185b093afd5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:10:13.470899 1361730 system_pods.go:61] "kube-proxy-ntp5b" [438c1e2b-7d5c-4d99-9143-b6f4169c2015] Running
	I1217 02:10:13.470905 1361730 system_pods.go:61] "kube-scheduler-pause-666844" [152b95ac-8cd4-487d-a1c2-3d99601960ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:10:13.470911 1361730 system_pods.go:74] duration metric: took 7.118551ms to wait for pod list to return data ...
	I1217 02:10:13.470924 1361730 default_sa.go:34] waiting for default service account to be created ...
	I1217 02:10:13.473063 1361730 default_sa.go:45] found service account: "default"
	I1217 02:10:13.473091 1361730 default_sa.go:55] duration metric: took 2.159968ms for default service account to be created ...
	I1217 02:10:13.473113 1361730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 02:10:13.475867 1361730 system_pods.go:86] 7 kube-system pods found
	I1217 02:10:13.475900 1361730 system_pods.go:89] "coredns-66bc5c9577-gqldk" [6e2209e5-3adb-4599-9018-3a91a74eca37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:10:13.475909 1361730 system_pods.go:89] "etcd-pause-666844" [36db6b8d-2628-48ab-9216-34b2724d6d3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:10:13.475924 1361730 system_pods.go:89] "kindnet-vpl6h" [adc49a53-a6af-4834-b7ac-000e9c04c4eb] Running
	I1217 02:10:13.475933 1361730 system_pods.go:89] "kube-apiserver-pause-666844" [19918a3f-12a2-4d78-a98f-f89a92913c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:10:13.475941 1361730 system_pods.go:89] "kube-controller-manager-pause-666844" [55dd406c-002d-47f9-9181-185b093afd5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:10:13.475948 1361730 system_pods.go:89] "kube-proxy-ntp5b" [438c1e2b-7d5c-4d99-9143-b6f4169c2015] Running
	I1217 02:10:13.475954 1361730 system_pods.go:89] "kube-scheduler-pause-666844" [152b95ac-8cd4-487d-a1c2-3d99601960ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:10:13.475967 1361730 system_pods.go:126] duration metric: took 2.844892ms to wait for k8s-apps to be running ...
	I1217 02:10:13.475976 1361730 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 02:10:13.476045 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:13.490293 1361730 system_svc.go:56] duration metric: took 14.307347ms WaitForService to wait for kubelet
	I1217 02:10:13.490328 1361730 kubeadm.go:587] duration metric: took 5.687428814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:10:13.490344 1361730 node_conditions.go:102] verifying NodePressure condition ...
	I1217 02:10:13.498412 1361730 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 02:10:13.498447 1361730 node_conditions.go:123] node cpu capacity is 2
	I1217 02:10:13.498460 1361730 node_conditions.go:105] duration metric: took 8.110505ms to run NodePressure ...
	I1217 02:10:13.498474 1361730 start.go:242] waiting for startup goroutines ...
	I1217 02:10:13.498482 1361730 start.go:247] waiting for cluster config update ...
	I1217 02:10:13.498490 1361730 start.go:256] writing updated cluster config ...
	I1217 02:10:13.498801 1361730 ssh_runner.go:195] Run: rm -f paused
	I1217 02:10:13.502849 1361730 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 02:10:13.503557 1361730 kapi.go:59] client config for pause-666844: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 02:10:13.567672 1361730 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqldk" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:10:15.575223 1361730 pod_ready.go:104] pod "coredns-66bc5c9577-gqldk" is not "Ready", error: <nil>
	W1217 02:10:18.074327 1361730 pod_ready.go:104] pod "coredns-66bc5c9577-gqldk" is not "Ready", error: <nil>
	I1217 02:10:20.085296 1361730 pod_ready.go:94] pod "coredns-66bc5c9577-gqldk" is "Ready"
	I1217 02:10:20.085325 1361730 pod_ready.go:86] duration metric: took 6.517624843s for pod "coredns-66bc5c9577-gqldk" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.091185 1361730 pod_ready.go:83] waiting for pod "etcd-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.105233 1361730 pod_ready.go:94] pod "etcd-pause-666844" is "Ready"
	I1217 02:10:20.105276 1361730 pod_ready.go:86] duration metric: took 14.057588ms for pod "etcd-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.190774 1361730 pod_ready.go:83] waiting for pod "kube-apiserver-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:10:22.196848 1361730 pod_ready.go:104] pod "kube-apiserver-pause-666844" is not "Ready", error: <nil>
	I1217 02:10:23.196454 1361730 pod_ready.go:94] pod "kube-apiserver-pause-666844" is "Ready"
	I1217 02:10:23.196483 1361730 pod_ready.go:86] duration metric: took 3.005679219s for pod "kube-apiserver-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.198749 1361730 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.203323 1361730 pod_ready.go:94] pod "kube-controller-manager-pause-666844" is "Ready"
	I1217 02:10:23.203350 1361730 pod_ready.go:86] duration metric: took 4.571236ms for pod "kube-controller-manager-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.205723 1361730 pod_ready.go:83] waiting for pod "kube-proxy-ntp5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.271497 1361730 pod_ready.go:94] pod "kube-proxy-ntp5b" is "Ready"
	I1217 02:10:23.271527 1361730 pod_ready.go:86] duration metric: took 65.775334ms for pod "kube-proxy-ntp5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.470706 1361730 pod_ready.go:83] waiting for pod "kube-scheduler-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.870754 1361730 pod_ready.go:94] pod "kube-scheduler-pause-666844" is "Ready"
	I1217 02:10:23.870784 1361730 pod_ready.go:86] duration metric: took 400.006426ms for pod "kube-scheduler-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.870797 1361730 pod_ready.go:40] duration metric: took 10.36786422s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 02:10:23.925798 1361730 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 02:10:23.930956 1361730 out.go:179] * Done! kubectl is now configured to use "pause-666844" cluster and "default" namespace by default
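	Editor's note: the restart log above ends with minikube polling the apiserver's /healthz endpoint; while poststart hooks such as rbac/bootstrap-roles are still pending it answers 500 with the per-check breakdown shown, and at 02:10:13 it returns 200. Below is a minimal Go sketch of that kind of poll, assuming the same https://192.168.85.2:8443 endpoint from the log and skipping certificate verification purely for illustration (the real checker authenticates with the profile's client.crt/client.key and ca.crt listed in the rest.Config above).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		url := "https://192.168.85.2:8443/healthz"

		// InsecureSkipVerify is for this sketch only; minikube's checker uses the
		// client certificate and CA from the rest.Config shown earlier instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}

		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d\n", resp.StatusCode)
				if resp.StatusCode == http.StatusOK {
					return // healthy, as at 02:10:13 in the log
				}
				fmt.Print(string(body)) // per-check [+]/[-] breakdown on 500
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between retries
		}
	}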
	
	
	==> CRI-O <==
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.161354807Z" level=info msg="Starting container: 87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca" id=70d79866-cc35-47e7-938e-5b1866e36c33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.170970526Z" level=info msg="Starting container: 50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014" id=aa785bf9-cab7-40a3-9737-4278d8d54e41 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.172177005Z" level=info msg="Started container" PID=2380 containerID=87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca description=kube-system/etcd-pause-666844/etcd id=70d79866-cc35-47e7-938e-5b1866e36c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b84534a4ad51ef27952b6ed1b91327e0159e6c36d672206b5dcd61f08ff9f906
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.17709998Z" level=info msg="Created container 7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22: kube-system/kube-proxy-ntp5b/kube-proxy" id=685c2169-2231-4146-8215-da16180b0b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.179381709Z" level=info msg="Starting container: 7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22" id=b404a05a-4eec-44f7-9e50-649ef47dd949 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.179802417Z" level=info msg="Started container" PID=2395 containerID=50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014 description=kube-system/kindnet-vpl6h/kindnet-cni id=aa785bf9-cab7-40a3-9737-4278d8d54e41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=323e783cf9239a72f6df7ab888c2419ef474de877c35e95e66cee7c4dd3da85a
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.182507144Z" level=info msg="Created container 8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20: kube-system/coredns-66bc5c9577-gqldk/coredns" id=a4ec8074-cffb-4473-b472-9c1732fd974c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.186789912Z" level=info msg="Starting container: 8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20" id=6c373d78-f486-4acd-a8ec-548f6c394b3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.203431884Z" level=info msg="Started container" PID=2390 containerID=8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20 description=kube-system/coredns-66bc5c9577-gqldk/coredns id=6c373d78-f486-4acd-a8ec-548f6c394b3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe4358fd31072ef655959c267c24fdf9cad376458f24ea94a0af0f5759a0c779
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.204580199Z" level=info msg="Started container" PID=2376 containerID=7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22 description=kube-system/kube-proxy-ntp5b/kube-proxy id=b404a05a-4eec-44f7-9e50-649ef47dd949 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a644de56145fc1eff2e26d5275a49a21ab9f70fc2388173dc65bfd4831643968
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.539132929Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543208809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543433772Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543527587Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547468315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547667506Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547764612Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552262759Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552485105Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552574867Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.56672494Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.566901618Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.566940345Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.576373799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.576464381Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8bac143739eb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   19 seconds ago       Running             kube-controller-manager   1                   f808e4259afae       kube-controller-manager-pause-666844   kube-system
	50e0d7ab5a6b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago       Running             kindnet-cni               1                   323e783cf9239       kindnet-vpl6h                          kube-system
	8f019167b6583       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   19 seconds ago       Running             coredns                   1                   fe4358fd31072       coredns-66bc5c9577-gqldk               kube-system
	7cde0d1f85171       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   19 seconds ago       Running             kube-proxy                1                   a644de56145fc       kube-proxy-ntp5b                       kube-system
	87ccbfd65324b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   19 seconds ago       Running             etcd                      1                   b84534a4ad51e       etcd-pause-666844                      kube-system
	aa1c4d79fcd4b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   19 seconds ago       Running             kube-apiserver            1                   0ffd8976f7ebc       kube-apiserver-pause-666844            kube-system
	6f82e6d867dd8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   19 seconds ago       Running             kube-scheduler            1                   9a056da516d10       kube-scheduler-pause-666844            kube-system
	22741cbb922b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago       Exited              coredns                   0                   fe4358fd31072       coredns-66bc5c9577-gqldk               kube-system
	7edaa15b6668a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   a644de56145fc       kube-proxy-ntp5b                       kube-system
	28a59b52ed064       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   323e783cf9239       kindnet-vpl6h                          kube-system
	8e601778d0498       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f808e4259afae       kube-controller-manager-pause-666844   kube-system
	86aa82d55ac5e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   0ffd8976f7ebc       kube-apiserver-pause-666844            kube-system
	d13ff1a752ca5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   9a056da516d10       kube-scheduler-pause-666844            kube-system
	b49736361e655       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b84534a4ad51e       etcd-pause-666844                      kube-system
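	Editor's note: the container status table lists the attempt-1 containers started at 02:10:08 as Running alongside their attempt-0 predecessors, which were left Exited in the same sandboxes across the restart. A table of this shape can be reproduced on the node with crictl; a small sketch, assuming crictl is installed, pointed at the CRI-O socket, and invoked with sudo.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List all containers, including the Exited attempt-0 ones left over
		// from before the restart, similar to the table above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}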
	
	
	==> coredns [22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47210 - 8848 "HINFO IN 1666532456141977913.4593343254278862335. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02337641s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40299 - 17999 "HINFO IN 6297234418200880835.510038069702975500. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00382622s
	
	
	==> describe nodes <==
	Name:               pause-666844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-666844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=pause-666844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T02_09_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 02:09:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-666844
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 02:10:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-666844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                55884fe1-5afc-4bf9-9b85-732808c50909
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gqldk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     73s
	  kube-system                 etcd-pause-666844                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-vpl6h                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-666844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-666844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-ntp5b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-666844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 72s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientPID     86s (x8 over 86s)  kubelet          Node pause-666844 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node pause-666844 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node pause-666844 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 79s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s                kubelet          Node pause-666844 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s                kubelet          Node pause-666844 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s                kubelet          Node pause-666844 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s                node-controller  Node pause-666844 event: Registered Node pause-666844 in Controller
	  Normal   NodeReady                32s                kubelet          Node pause-666844 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-666844 event: Registered Node pause-666844 in Controller
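	Editor's note: in the Allocated resources block above, kubectl reports request and limit totals as integer percentages of the node's allocatable capacity (2 CPUs, 8022292Ki memory), which is why 850m of CPU requests shows as 42% and 220Mi of memory requests as 2%. A small sketch of the arithmetic, using only figures taken from the table:

	package main

	import "fmt"

	func main() {
		// Allocatable capacity and summed requests from the "describe nodes" output above.
		allocatableCPUMilli := int64(2_000)  // 2 CPUs
		allocatableMemKi := int64(8_022_292) // memory: 8022292Ki
		cpuRequestsMilli := int64(850)       // 100m+100m+100m+250m+200m+0m+100m
		memRequestsKi := int64(220) * 1024   // 220Mi expressed in Ki

		// kubectl prints these as integer percentages of allocatable capacity.
		fmt.Printf("cpu    %d%%\n", cpuRequestsMilli*100/allocatableCPUMilli) // 42%
		fmt.Printf("memory %d%%\n", memRequestsKi*100/allocatableMemKi)       // 2%
	}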
	
	
	==> dmesg <==
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	[Dec17 01:36] overlayfs: idmapped layers are currently not supported
	[Dec17 01:38] overlayfs: idmapped layers are currently not supported
	[Dec17 01:43] overlayfs: idmapped layers are currently not supported
	[ +37.335374] overlayfs: idmapped layers are currently not supported
	[Dec17 01:45] overlayfs: idmapped layers are currently not supported
	[Dec17 01:46] overlayfs: idmapped layers are currently not supported
	[Dec17 01:47] overlayfs: idmapped layers are currently not supported
	[Dec17 01:48] overlayfs: idmapped layers are currently not supported
	[Dec17 01:49] overlayfs: idmapped layers are currently not supported
	[  +7.899083] overlayfs: idmapped layers are currently not supported
	[Dec17 01:50] overlayfs: idmapped layers are currently not supported
	[ +25.041678] overlayfs: idmapped layers are currently not supported
	[Dec17 01:51] overlayfs: idmapped layers are currently not supported
	[ +26.339183] overlayfs: idmapped layers are currently not supported
	[Dec17 01:53] overlayfs: idmapped layers are currently not supported
	[Dec17 01:54] overlayfs: idmapped layers are currently not supported
	[Dec17 01:56] overlayfs: idmapped layers are currently not supported
	[Dec17 01:58] overlayfs: idmapped layers are currently not supported
	[Dec17 02:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca] <==
	{"level":"warn","ts":"2025-12-17T02:10:10.674243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.696372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.735007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.751486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.776899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.794483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.871211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.883914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.907789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.948885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.968472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.992955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.016603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.039423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.097664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.127974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.159904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.184731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.212532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.225037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.242238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.272138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.289723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.301584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.408569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60490","server-name":"","error":"EOF"}
	
	
	==> etcd [b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539] <==
	{"level":"warn","ts":"2025-12-17T02:09:04.969040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:04.992920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.025599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.062703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.085844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.102232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.153042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T02:09:59.508660Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T02:09:59.508707Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-666844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-17T02:09:59.508805Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T02:09:59.649043Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T02:09:59.650543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.650597Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650643Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T02:09:59.650652Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.650661Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T02:09:59.650671Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650708Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T02:09:59.650727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.654097Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-17T02:09:59.654190Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.654224Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T02:09:59.654232Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-666844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 02:10:27 up  7:52,  0 user,  load average: 1.58, 1.44, 1.66
	Linux pause-666844 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120] <==
	I1217 02:09:14.626669       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 02:09:14.627086       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 02:09:14.627246       1 main.go:148] setting mtu 1500 for CNI 
	I1217 02:09:14.627258       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 02:09:14.627271       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T02:09:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 02:09:14.831805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 02:09:14.831898       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 02:09:14.831931       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 02:09:14.832747       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1217 02:09:44.832479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1217 02:09:44.832504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1217 02:09:44.832600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1217 02:09:44.832685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1217 02:09:46.432147       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 02:09:46.432247       1 metrics.go:72] Registering metrics
	I1217 02:09:46.432398       1 controller.go:711] "Syncing nftables rules"
	I1217 02:09:54.831447       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 02:09:54.831515       1 main.go:301] handling current node
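	Editor's note: this earlier kindnet instance lost its list/watch connections with "dial tcp 10.96.0.1:443: i/o timeout" while the apiserver was down, then resynced its caches at 02:09:46 once the service VIP answered again. A tiny sketch that reproduces the same reachability symptom against that VIP (the address is taken from the log; it assumes you run it from inside the cluster network, where the service VIP is routable):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// In-cluster apiserver service VIP from the kindnet log above.
		addr := "10.96.0.1:443"

		// A plain dial with a deadline shows the same "i/o timeout" symptom
		// kindnet's informers reported while the apiserver was restarting.
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable; informers can relist and sync caches")
	}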
	
	
	==> kindnet [50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014] <==
	I1217 02:10:08.336462       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 02:10:08.339752       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 02:10:08.339966       1 main.go:148] setting mtu 1500 for CNI 
	I1217 02:10:08.340008       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 02:10:08.340051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T02:10:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 02:10:08.545131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 02:10:08.545232       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 02:10:08.545268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 02:10:08.545732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 02:10:12.446058       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 02:10:12.446166       1 metrics.go:72] Registering metrics
	I1217 02:10:12.446277       1 controller.go:711] "Syncing nftables rules"
	I1217 02:10:18.538661       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 02:10:18.538798       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e] <==
	W1217 02:09:59.530575       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.530642       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533736       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533829       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533876       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533932       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533987       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534043       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534098       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534154       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534210       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534262       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535300       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535372       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535414       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535462       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535516       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535562       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535614       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536071       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536129       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536179       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536561       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536634       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536699       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e] <==
	I1217 02:10:12.440181       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 02:10:12.450049       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 02:10:12.450163       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 02:10:12.452829       1 aggregator.go:171] initial CRD sync complete...
	I1217 02:10:12.452921       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 02:10:12.452952       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 02:10:12.452981       1 cache.go:39] Caches are synced for autoregister controller
	I1217 02:10:12.450468       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 02:10:12.453207       1 policy_source.go:240] refreshing policies
	I1217 02:10:12.460177       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 02:10:12.460215       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 02:10:12.460637       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 02:10:12.460770       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 02:10:12.461147       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 02:10:12.462237       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 02:10:12.467475       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1217 02:10:12.489982       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 02:10:12.551208       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 02:10:12.551337       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 02:10:13.142470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 02:10:13.435391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 02:10:14.974230       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 02:10:15.039914       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 02:10:15.174260       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 02:10:15.226852       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7] <==
	I1217 02:09:12.877866       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 02:09:12.877894       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 02:09:12.878069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 02:09:12.881783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 02:09:12.882973       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 02:09:12.887885       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-666844" podCIDRs=["10.244.0.0/24"]
	I1217 02:09:12.893398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 02:09:12.896492       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 02:09:12.915024       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:09:12.916235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 02:09:12.919711       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 02:09:12.919828       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 02:09:12.920783       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 02:09:12.920900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 02:09:12.920913       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 02:09:12.921141       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 02:09:12.920921       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 02:09:12.923398       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 02:09:12.924937       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:09:12.924967       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 02:09:12.924983       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 02:09:12.926108       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 02:09:12.936551       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 02:09:12.938332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:09:57.860132       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197] <==
	I1217 02:10:14.821634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 02:10:14.821643       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 02:10:14.821651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 02:10:14.828679       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 02:10:14.828934       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:10:14.830042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:10:14.830068       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 02:10:14.830074       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 02:10:14.833632       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 02:10:14.836508       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 02:10:14.853758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:10:14.867618       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 02:10:14.867686       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 02:10:14.867617       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 02:10:14.867732       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 02:10:14.867829       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 02:10:14.868005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 02:10:14.868105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-666844"
	I1217 02:10:14.868196       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 02:10:14.878580       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 02:10:14.879665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:10:14.883987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 02:10:14.887143       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 02:10:14.899578       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 02:10:14.916065       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22] <==
	I1217 02:10:11.245034       1 server_linux.go:53] "Using iptables proxy"
	I1217 02:10:12.061389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 02:10:12.564493       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 02:10:12.564537       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 02:10:12.564604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 02:10:12.808589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 02:10:12.808649       1 server_linux.go:132] "Using iptables Proxier"
	I1217 02:10:12.824011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 02:10:12.824407       1 server.go:527] "Version info" version="v1.34.2"
	I1217 02:10:12.834928       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:10:12.836534       1 config.go:200] "Starting service config controller"
	I1217 02:10:12.836555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 02:10:12.836573       1 config.go:106] "Starting endpoint slice config controller"
	I1217 02:10:12.836584       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 02:10:12.836596       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 02:10:12.836601       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 02:10:12.837363       1 config.go:309] "Starting node config controller"
	I1217 02:10:12.837381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 02:10:12.837388       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 02:10:12.937678       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 02:10:12.937778       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 02:10:12.937809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6] <==
	I1217 02:09:14.642957       1 server_linux.go:53] "Using iptables proxy"
	I1217 02:09:14.819221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 02:09:14.920040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 02:09:14.920152       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 02:09:14.920252       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 02:09:14.942615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 02:09:14.942666       1 server_linux.go:132] "Using iptables Proxier"
	I1217 02:09:14.946779       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 02:09:14.947110       1 server.go:527] "Version info" version="v1.34.2"
	I1217 02:09:14.947135       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:09:14.948842       1 config.go:200] "Starting service config controller"
	I1217 02:09:14.948936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 02:09:14.949004       1 config.go:106] "Starting endpoint slice config controller"
	I1217 02:09:14.949041       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 02:09:14.949080       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 02:09:14.949116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 02:09:14.949824       1 config.go:309] "Starting node config controller"
	I1217 02:09:14.949888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 02:09:14.949922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 02:09:15.049849       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 02:09:15.049901       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 02:09:15.049971       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18] <==
	I1217 02:10:10.648306       1 serving.go:386] Generated self-signed cert in-memory
	I1217 02:10:13.189272       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 02:10:13.189438       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:10:13.197532       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 02:10:13.197595       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 02:10:13.197636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:10:13.197656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:10:13.197673       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.197691       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.200834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 02:10:13.201000       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 02:10:13.297891       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.298020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 02:10:13.298143       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d] <==
	E1217 02:09:05.889785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 02:09:05.889935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 02:09:05.890018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 02:09:05.890123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 02:09:06.704618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 02:09:06.749336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 02:09:06.779746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 02:09:06.794892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 02:09:06.812001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 02:09:06.859404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 02:09:06.894512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 02:09:07.014682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 02:09:07.025902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 02:09:07.035683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 02:09:07.075715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 02:09:07.106633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 02:09:07.141282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 02:09:07.148266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1217 02:09:09.672038       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:09:59.505376       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 02:09:59.505406       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 02:09:59.505431       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 02:09:59.505456       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:09:59.505656       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 02:09:59.505672       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.896191    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.901175    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: I1217 02:10:07.949040    1327 scope.go:117] "RemoveContainer" containerID="22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.949649    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntp5b\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="438c1e2b-7d5c-4d99-9143-b6f4169c2015" pod="kube-system/kube-proxy-ntp5b"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.949842    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gqldk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6e2209e5-3adb-4599-9018-3a91a74eca37" pod="kube-system/coredns-66bc5c9577-gqldk"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950003    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950158    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950320    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2f7dd8b69b48534f8c8e38a462218518" pod="kube-system/kube-scheduler-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950481    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vpl6h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="adc49a53-a6af-4834-b7ac-000e9c04c4eb" pod="kube-system/kindnet-vpl6h"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: I1217 02:10:07.995372    1327 scope.go:117] "RemoveContainer" containerID="8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996007    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vpl6h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="adc49a53-a6af-4834-b7ac-000e9c04c4eb" pod="kube-system/kindnet-vpl6h"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996186    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntp5b\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="438c1e2b-7d5c-4d99-9143-b6f4169c2015" pod="kube-system/kube-proxy-ntp5b"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996343    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gqldk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6e2209e5-3adb-4599-9018-3a91a74eca37" pod="kube-system/coredns-66bc5c9577-gqldk"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996690    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997013    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997193    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07672ffe58e59d0679acf1c2e5f2c41e" pod="kube-system/kube-controller-manager-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997344    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2f7dd8b69b48534f8c8e38a462218518" pod="kube-system/kube-scheduler-pause-666844"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.413623    1327 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.414237    1327 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.414690    1327 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.415105    1327 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-666844\" is forbidden: User \"system:node:pause-666844\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:18 pause-666844 kubelet[1327]: W1217 02:10:18.916117    1327 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 17 02:10:24 pause-666844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 02:10:24 pause-666844 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 02:10:24 pause-666844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-666844 -n pause-666844
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-666844 -n pause-666844: exit status 2 (392.326974ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-666844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-666844
helpers_test.go:244: (dbg) docker inspect pause-666844:

-- stdout --
	[
	    {
	        "Id": "60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2",
	        "Created": "2025-12-17T02:08:41.282747949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1359134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:08:41.374256021Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
	        "ResolvConfPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/hosts",
	        "LogPath": "/var/lib/docker/containers/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2/60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2-json.log",
	        "Name": "/pause-666844",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-666844:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-666844",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "60fafd7550880eec717de196bef79fe30d356123778699061cef9f830dfbc4b2",
	                "LowerDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881-init/diff:/var/lib/docker/overlay2/21f145f1a5d49f54aaa01bd0dd6193b94ff18b280464ab5d785ce478cdab9c10/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8e22ecf04f322caf676ccfc88acd75a059ea6cf0b5b4d2d8752323c422da881/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-666844",
	                "Source": "/var/lib/docker/volumes/pause-666844/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-666844",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-666844",
	                "name.minikube.sigs.k8s.io": "pause-666844",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b2d5d257a5348f90ea5ee54bc3afd837bb0ab3d408bde342d89f56b5c3e9a3f",
	            "SandboxKey": "/var/run/docker/netns/6b2d5d257a53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-666844": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a8:91:03:07:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ba59bc309849ecfb509f3782d575a22b8cbb4862095a8294215cd2919b4761d",
	                    "EndpointID": "e707523eda5c8a01483abc584e542025936b427925e7cc9c6477a511abe2d8a3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-666844",
	                        "60fafd755088"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-666844 -n pause-666844
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-666844 -n pause-666844: exit status 2 (369.610431ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-666844 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-666844 logs -n 25: (1.452119039s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-262920 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p missing-upgrade-935345 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-935345    │ jenkins │ v1.35.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p missing-upgrade-935345 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p NoKubernetes-262920 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ stop    │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p NoKubernetes-262920 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ ssh     │ -p NoKubernetes-262920 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │                     │
	│ delete  │ -p NoKubernetes-262920                                                                                                                          │ NoKubernetes-262920       │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:58 UTC │
	│ delete  │ -p missing-upgrade-935345                                                                                                                       │ missing-upgrade-935345    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ stop    │ -p kubernetes-upgrade-813956                                                                                                                    │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p kubernetes-upgrade-813956 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-813956 │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │                     │
	│ stop    │ stopped-upgrade-925123 stop                                                                                                                     │ stopped-upgrade-925123    │ jenkins │ v1.35.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 01:58 UTC │
	│ start   │ -p stopped-upgrade-925123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 01:58 UTC │ 17 Dec 25 02:03 UTC │
	│ delete  │ -p stopped-upgrade-925123                                                                                                                       │ stopped-upgrade-925123    │ jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:03 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-842996    │ jenkins │ v1.35.0 │ 17 Dec 25 02:03 UTC │ 17 Dec 25 02:04 UTC │
	│ start   │ -p running-upgrade-842996 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:04 UTC │ 17 Dec 25 02:08 UTC │
	│ delete  │ -p running-upgrade-842996                                                                                                                       │ running-upgrade-842996    │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:08 UTC │
	│ start   │ -p pause-666844 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:08 UTC │ 17 Dec 25 02:09 UTC │
	│ start   │ -p pause-666844 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:09 UTC │ 17 Dec 25 02:10 UTC │
	│ pause   │ -p pause-666844 --alsologtostderr -v=5                                                                                                          │ pause-666844              │ jenkins │ v1.37.0 │ 17 Dec 25 02:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:09:57
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:09:57.935029 1361730 out.go:360] Setting OutFile to fd 1 ...
	I1217 02:09:57.935255 1361730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:09:57.935284 1361730 out.go:374] Setting ErrFile to fd 2...
	I1217 02:09:57.935303 1361730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:09:57.935604 1361730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 02:09:57.936033 1361730 out.go:368] Setting JSON to false
	I1217 02:09:57.937119 1361730 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":28348,"bootTime":1765909050,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 02:09:57.937223 1361730 start.go:143] virtualization:  
	I1217 02:09:57.940570 1361730 out.go:179] * [pause-666844] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 02:09:57.944688 1361730 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:09:57.944764 1361730 notify.go:221] Checking for updates...
	I1217 02:09:57.948330 1361730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:09:57.951395 1361730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 02:09:57.954390 1361730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 02:09:57.957319 1361730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 02:09:57.960254 1361730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:09:57.963788 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:09:57.964492 1361730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:09:57.990410 1361730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 02:09:57.990548 1361730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:09:58.064660 1361730 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 02:09:58.054206714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:09:58.064779 1361730 docker.go:319] overlay module found
	I1217 02:09:58.068117 1361730 out.go:179] * Using the docker driver based on existing profile
	I1217 02:09:58.071062 1361730 start.go:309] selected driver: docker
	I1217 02:09:58.071152 1361730 start.go:927] validating driver "docker" against &{Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:09:58.071324 1361730 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:09:58.071555 1361730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:09:58.142934 1361730 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 02:09:58.133711586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 02:09:58.143388 1361730 cni.go:84] Creating CNI manager for ""
	I1217 02:09:58.143449 1361730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:09:58.143500 1361730 start.go:353] cluster config:
	{Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:09:58.146737 1361730 out.go:179] * Starting "pause-666844" primary control-plane node in "pause-666844" cluster
	I1217 02:09:58.149622 1361730 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 02:09:58.152552 1361730 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:09:58.155451 1361730 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:09:58.155485 1361730 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:09:58.155532 1361730 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 02:09:58.155542 1361730 cache.go:65] Caching tarball of preloaded images
	I1217 02:09:58.155626 1361730 preload.go:238] Found /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1217 02:09:58.155635 1361730 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 02:09:58.155906 1361730 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/config.json ...
	I1217 02:09:58.175683 1361730 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:09:58.175706 1361730 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:09:58.175726 1361730 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:09:58.175764 1361730 start.go:360] acquireMachinesLock for pause-666844: {Name:mk669b34aeea697cf796906dfe79ca962658ddfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:09:58.175821 1361730 start.go:364] duration metric: took 37.726µs to acquireMachinesLock for "pause-666844"
	I1217 02:09:58.175848 1361730 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:09:58.175859 1361730 fix.go:54] fixHost starting: 
	I1217 02:09:58.176134 1361730 cli_runner.go:164] Run: docker container inspect pause-666844 --format={{.State.Status}}
	I1217 02:09:58.193502 1361730 fix.go:112] recreateIfNeeded on pause-666844: state=Running err=<nil>
	W1217 02:09:58.193539 1361730 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:09:58.196745 1361730 out.go:252] * Updating the running docker "pause-666844" container ...
	I1217 02:09:58.196786 1361730 machine.go:94] provisionDockerMachine start ...
	I1217 02:09:58.196867 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.214323 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.214646 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.214661 1361730 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:09:58.352240 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-666844
	
	I1217 02:09:58.352266 1361730 ubuntu.go:182] provisioning hostname "pause-666844"
	I1217 02:09:58.352328 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.369953 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.370299 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.370315 1361730 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-666844 && echo "pause-666844" | sudo tee /etc/hostname
	I1217 02:09:58.511521 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-666844
	
	I1217 02:09:58.511644 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:58.531543 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:58.531858 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:58.531880 1361730 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-666844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-666844/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-666844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:09:58.665426 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:09:58.665452 1361730 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1134739/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1134739/.minikube}
	I1217 02:09:58.665470 1361730 ubuntu.go:190] setting up certificates
	I1217 02:09:58.665480 1361730 provision.go:84] configureAuth start
	I1217 02:09:58.665543 1361730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-666844
	I1217 02:09:58.684007 1361730 provision.go:143] copyHostCerts
	I1217 02:09:58.684079 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem, removing ...
	I1217 02:09:58.684094 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem
	I1217 02:09:58.684169 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/key.pem (1675 bytes)
	I1217 02:09:58.684277 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem, removing ...
	I1217 02:09:58.684283 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem
	I1217 02:09:58.684312 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.pem (1082 bytes)
	I1217 02:09:58.684371 1361730 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem, removing ...
	I1217 02:09:58.684376 1361730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem
	I1217 02:09:58.684403 1361730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1134739/.minikube/cert.pem (1123 bytes)
	I1217 02:09:58.684573 1361730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem org=jenkins.pause-666844 san=[127.0.0.1 192.168.85.2 localhost minikube pause-666844]
	I1217 02:09:59.146710 1361730 provision.go:177] copyRemoteCerts
	I1217 02:09:59.146791 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:09:59.146832 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:59.166906 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:09:59.269322 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 02:09:59.287890 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 02:09:59.306568 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:09:59.325658 1361730 provision.go:87] duration metric: took 660.162493ms to configureAuth
	I1217 02:09:59.325686 1361730 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:09:59.325916 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:09:59.326026 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:09:59.344881 1361730 main.go:143] libmachine: Using SSH client type: native
	I1217 02:09:59.345214 1361730 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 34148 <nil> <nil>}
	I1217 02:09:59.345235 1361730 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 02:10:04.737874 1361730 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 02:10:04.737917 1361730 machine.go:97] duration metric: took 6.541120092s to provisionDockerMachine
	I1217 02:10:04.737929 1361730 start.go:293] postStartSetup for "pause-666844" (driver="docker")
	I1217 02:10:04.737957 1361730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:10:04.738037 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:10:04.738094 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:04.760261 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:04.857137 1361730 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:10:04.861090 1361730 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:10:04.861120 1361730 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:10:04.861132 1361730 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/addons for local assets ...
	I1217 02:10:04.861196 1361730 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1134739/.minikube/files for local assets ...
	I1217 02:10:04.861315 1361730 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem -> 11365972.pem in /etc/ssl/certs
	I1217 02:10:04.861439 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:10:04.869828 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:04.889259 1361730 start.go:296] duration metric: took 151.297442ms for postStartSetup
	I1217 02:10:04.889341 1361730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:10:04.889388 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:04.907559 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.003311 1361730 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:10:05.011551 1361730 fix.go:56] duration metric: took 6.835681918s for fixHost
	I1217 02:10:05.011589 1361730 start.go:83] releasing machines lock for "pause-666844", held for 6.835751307s
	I1217 02:10:05.011676 1361730 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-666844
	I1217 02:10:05.043380 1361730 ssh_runner.go:195] Run: cat /version.json
	I1217 02:10:05.043435 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:05.043698 1361730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 02:10:05.043744 1361730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-666844
	I1217 02:10:05.065358 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.078245 1361730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34148 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/pause-666844/id_rsa Username:docker}
	I1217 02:10:05.279654 1361730 ssh_runner.go:195] Run: systemctl --version
	I1217 02:10:05.286588 1361730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 02:10:05.336335 1361730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:10:05.340952 1361730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:10:05.341030 1361730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:10:05.350244 1361730 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:10:05.350269 1361730 start.go:496] detecting cgroup driver to use...
	I1217 02:10:05.350301 1361730 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:10:05.350355 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:10:05.367802 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:10:05.381924 1361730 docker.go:218] disabling cri-docker service (if available) ...
	I1217 02:10:05.381998 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 02:10:05.398409 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 02:10:05.412614 1361730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 02:10:05.561952 1361730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 02:10:05.697813 1361730 docker.go:234] disabling docker service ...
	I1217 02:10:05.697895 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 02:10:05.714211 1361730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 02:10:05.728048 1361730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 02:10:05.897831 1361730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 02:10:06.038839 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:10:06.054079 1361730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:10:06.070237 1361730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 02:10:06.070402 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.081868 1361730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 02:10:06.081997 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.092388 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.102495 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.113108 1361730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:10:06.122344 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.132243 1361730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.141341 1361730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 02:10:06.150770 1361730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:10:06.160049 1361730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:10:06.168786 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:06.302108 1361730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 02:10:06.527519 1361730 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 02:10:06.527669 1361730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 02:10:06.532330 1361730 start.go:564] Will wait 60s for crictl version
	I1217 02:10:06.532494 1361730 ssh_runner.go:195] Run: which crictl
	I1217 02:10:06.536730 1361730 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:10:06.572068 1361730 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 02:10:06.572250 1361730 ssh_runner.go:195] Run: crio --version
	I1217 02:10:06.601522 1361730 ssh_runner.go:195] Run: crio --version
	I1217 02:10:06.636880 1361730 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1217 02:10:06.639936 1361730 cli_runner.go:164] Run: docker network inspect pause-666844 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 02:10:06.656119 1361730 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 02:10:06.660206 1361730 kubeadm.go:884] updating cluster {Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:10:06.660358 1361730 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 02:10:06.660411 1361730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:06.693119 1361730 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:06.693146 1361730 crio.go:433] Images already preloaded, skipping extraction
	I1217 02:10:06.693205 1361730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 02:10:06.720638 1361730 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 02:10:06.720662 1361730 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:10:06.720670 1361730 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1217 02:10:06.720777 1361730 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-666844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:10:06.720856 1361730 ssh_runner.go:195] Run: crio config
	I1217 02:10:06.787049 1361730 cni.go:84] Creating CNI manager for ""
	I1217 02:10:06.787122 1361730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 02:10:06.787158 1361730 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:10:06.787214 1361730 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-666844 NodeName:pause-666844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:10:06.787382 1361730 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-666844"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:10:06.787501 1361730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 02:10:06.795844 1361730 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:10:06.795964 1361730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:10:06.803797 1361730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1217 02:10:06.817075 1361730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 02:10:06.831844 1361730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 02:10:06.845499 1361730 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:10:06.849260 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:06.987965 1361730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:10:07.001897 1361730 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844 for IP: 192.168.85.2
	I1217 02:10:07.001984 1361730 certs.go:195] generating shared ca certs ...
	I1217 02:10:07.002017 1361730 certs.go:227] acquiring lock for ca certs: {Name:mk79dbec824f655721f17a578dcd85ece499c9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.002230 1361730 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key
	I1217 02:10:07.002308 1361730 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key
	I1217 02:10:07.002346 1361730 certs.go:257] generating profile certs ...
	I1217 02:10:07.002486 1361730 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key
	I1217 02:10:07.002622 1361730 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.key.a9797934
	I1217 02:10:07.002699 1361730 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.key
	I1217 02:10:07.002852 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem (1338 bytes)
	W1217 02:10:07.002916 1361730 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597_empty.pem, impossibly tiny 0 bytes
	I1217 02:10:07.002941 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 02:10:07.003002 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/ca.pem (1082 bytes)
	I1217 02:10:07.003057 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/cert.pem (1123 bytes)
	I1217 02:10:07.003125 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/key.pem (1675 bytes)
	I1217 02:10:07.003264 1361730 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem (1708 bytes)
	I1217 02:10:07.004037 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:10:07.025194 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:10:07.044282 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:10:07.063133 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:10:07.082596 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 02:10:07.102325 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:10:07.123369 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:10:07.143113 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:10:07.162055 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/ssl/certs/11365972.pem --> /usr/share/ca-certificates/11365972.pem (1708 bytes)
	I1217 02:10:07.180849 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:10:07.199379 1361730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1134739/.minikube/certs/1136597.pem --> /usr/share/ca-certificates/1136597.pem (1338 bytes)
	I1217 02:10:07.218010 1361730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:10:07.231839 1361730 ssh_runner.go:195] Run: openssl version
	I1217 02:10:07.238442 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.246630 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:10:07.260208 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.266620 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:29 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.266693 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:10:07.309502 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:10:07.317884 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.326344 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1136597.pem /etc/ssl/certs/1136597.pem
	I1217 02:10:07.335596 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.339978 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:41 /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.340056 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1136597.pem
	I1217 02:10:07.393657 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:10:07.402741 1361730 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.410733 1361730 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11365972.pem /etc/ssl/certs/11365972.pem
	I1217 02:10:07.418810 1361730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.423165 1361730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:41 /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.423296 1361730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11365972.pem
	I1217 02:10:07.466143 1361730 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:10:07.474073 1361730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:10:07.477958 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:10:07.519488 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:10:07.560825 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:10:07.603304 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:10:07.644963 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:10:07.686630 1361730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
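The six openssl calls above check that each control-plane certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit means the certificate expires within that window. A minimal standalone version of the same check, reusing certificate paths from the log:

    # exits 0 per certificate that is still valid 24h from now
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
        sudo openssl x509 -noout -in "$crt" -checkend 86400 \
            && echo "$crt: valid for >24h" \
            || echo "$crt: expires within 24h"
    done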
	I1217 02:10:07.727673 1361730 kubeadm.go:401] StartCluster: {Name:pause-666844 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-666844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:10:07.727799 1361730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 02:10:07.727872 1361730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 02:10:07.760648 1361730 cri.go:89] found id: "22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	I1217 02:10:07.760669 1361730 cri.go:89] found id: "7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6"
	I1217 02:10:07.760673 1361730 cri.go:89] found id: "28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120"
	I1217 02:10:07.760677 1361730 cri.go:89] found id: "8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	I1217 02:10:07.760680 1361730 cri.go:89] found id: "86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e"
	I1217 02:10:07.760683 1361730 cri.go:89] found id: "d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d"
	I1217 02:10:07.760687 1361730 cri.go:89] found id: "b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539"
	I1217 02:10:07.760690 1361730 cri.go:89] found id: ""
	I1217 02:10:07.760740 1361730 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 02:10:07.773143 1361730 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T02:10:07Z" level=error msg="open /run/runc: no such file or directory"
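The paused-container probe above first asks the CRI for every kube-system container and then tries `runc list`, which fails here because CRI-O keeps no state under /run/runc. The CRI half of that probe can be reproduced on the node with the same crictl invocation (sketch; run on the minikube node, e.g. via `minikube ssh`):

    # IDs of all kube-system containers known to CRI-O, including exited ones
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system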
	I1217 02:10:07.773226 1361730 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:10:07.781544 1361730 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:10:07.781565 1361730 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:10:07.781620 1361730 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:10:07.789118 1361730 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:10:07.790162 1361730 kubeconfig.go:125] found "pause-666844" server: "https://192.168.85.2:8443"
	I1217 02:10:07.790989 1361730 kapi.go:59] client config for pause-666844: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 02:10:07.791500 1361730 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 02:10:07.791519 1361730 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 02:10:07.791525 1361730 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 02:10:07.791533 1361730 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 02:10:07.791544 1361730 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 02:10:07.791836 1361730 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:10:07.801544 1361730 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 02:10:07.801586 1361730 kubeadm.go:602] duration metric: took 20.013811ms to restartPrimaryControlPlane
	I1217 02:10:07.801596 1361730 kubeadm.go:403] duration metric: took 73.933511ms to StartCluster
	I1217 02:10:07.801616 1361730 settings.go:142] acquiring lock: {Name:mk320c773a0b358190614bce0f3947b41700660a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.801693 1361730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 02:10:07.802621 1361730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/kubeconfig: {Name:mk45348e817fc1c8625c2f75acdbca863cda05b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:10:07.802867 1361730 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 02:10:07.803183 1361730 config.go:182] Loaded profile config "pause-666844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 02:10:07.803244 1361730 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:10:07.807506 1361730 out.go:179] * Verifying Kubernetes components...
	I1217 02:10:07.807546 1361730 out.go:179] * Enabled addons: 
	I1217 02:10:07.811325 1361730 addons.go:530] duration metric: took 8.072508ms for enable addons: enabled=[]
	I1217 02:10:07.811408 1361730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:10:08.132169 1361730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:10:08.176517 1361730 node_ready.go:35] waiting up to 6m0s for node "pause-666844" to be "Ready" ...
	I1217 02:10:12.432442 1361730 node_ready.go:49] node "pause-666844" is "Ready"
	I1217 02:10:12.432475 1361730 node_ready.go:38] duration metric: took 4.255926663s for node "pause-666844" to be "Ready" ...
	I1217 02:10:12.432489 1361730 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:10:12.432559 1361730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:12.449023 1361730 api_server.go:72] duration metric: took 4.646110339s to wait for apiserver process to appear ...
	I1217 02:10:12.449060 1361730 api_server.go:88] waiting for apiserver healthz status ...
	I1217 02:10:12.449080 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:12.469901 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 02:10:12.469933 1361730 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 02:10:12.949619 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:12.958759 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 02:10:12.958808 1361730 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 02:10:13.449212 1361730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 02:10:13.461397 1361730 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 02:10:13.463739 1361730 api_server.go:141] control plane version: v1.34.2
	I1217 02:10:13.463776 1361730 api_server.go:131] duration metric: took 1.014710424s to wait for apiserver health ...
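The loop above simply re-queries /healthz until the remaining post-start hooks (bootstrap-controller, rbac/bootstrap-roles, ...) finish and the endpoint returns 200. A rough manual equivalent against the same endpoint (a sketch; -k skips TLS verification, and the ?verbose form prints the per-check breakdown seen in the log):

    until curl -sk https://192.168.85.2:8443/healthz | grep -qx ok; do
        sleep 0.5        # apiserver still reports 500 while hooks are pending
    done
    curl -sk "https://192.168.85.2:8443/healthz?verbose"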
	I1217 02:10:13.463786 1361730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 02:10:13.470811 1361730 system_pods.go:59] 7 kube-system pods found
	I1217 02:10:13.470850 1361730 system_pods.go:61] "coredns-66bc5c9577-gqldk" [6e2209e5-3adb-4599-9018-3a91a74eca37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:10:13.470862 1361730 system_pods.go:61] "etcd-pause-666844" [36db6b8d-2628-48ab-9216-34b2724d6d3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:10:13.470867 1361730 system_pods.go:61] "kindnet-vpl6h" [adc49a53-a6af-4834-b7ac-000e9c04c4eb] Running
	I1217 02:10:13.470882 1361730 system_pods.go:61] "kube-apiserver-pause-666844" [19918a3f-12a2-4d78-a98f-f89a92913c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:10:13.470895 1361730 system_pods.go:61] "kube-controller-manager-pause-666844" [55dd406c-002d-47f9-9181-185b093afd5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:10:13.470899 1361730 system_pods.go:61] "kube-proxy-ntp5b" [438c1e2b-7d5c-4d99-9143-b6f4169c2015] Running
	I1217 02:10:13.470905 1361730 system_pods.go:61] "kube-scheduler-pause-666844" [152b95ac-8cd4-487d-a1c2-3d99601960ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:10:13.470911 1361730 system_pods.go:74] duration metric: took 7.118551ms to wait for pod list to return data ...
	I1217 02:10:13.470924 1361730 default_sa.go:34] waiting for default service account to be created ...
	I1217 02:10:13.473063 1361730 default_sa.go:45] found service account: "default"
	I1217 02:10:13.473091 1361730 default_sa.go:55] duration metric: took 2.159968ms for default service account to be created ...
	I1217 02:10:13.473113 1361730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 02:10:13.475867 1361730 system_pods.go:86] 7 kube-system pods found
	I1217 02:10:13.475900 1361730 system_pods.go:89] "coredns-66bc5c9577-gqldk" [6e2209e5-3adb-4599-9018-3a91a74eca37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 02:10:13.475909 1361730 system_pods.go:89] "etcd-pause-666844" [36db6b8d-2628-48ab-9216-34b2724d6d3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 02:10:13.475924 1361730 system_pods.go:89] "kindnet-vpl6h" [adc49a53-a6af-4834-b7ac-000e9c04c4eb] Running
	I1217 02:10:13.475933 1361730 system_pods.go:89] "kube-apiserver-pause-666844" [19918a3f-12a2-4d78-a98f-f89a92913c31] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 02:10:13.475941 1361730 system_pods.go:89] "kube-controller-manager-pause-666844" [55dd406c-002d-47f9-9181-185b093afd5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 02:10:13.475948 1361730 system_pods.go:89] "kube-proxy-ntp5b" [438c1e2b-7d5c-4d99-9143-b6f4169c2015] Running
	I1217 02:10:13.475954 1361730 system_pods.go:89] "kube-scheduler-pause-666844" [152b95ac-8cd4-487d-a1c2-3d99601960ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 02:10:13.475967 1361730 system_pods.go:126] duration metric: took 2.844892ms to wait for k8s-apps to be running ...
	I1217 02:10:13.475976 1361730 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 02:10:13.476045 1361730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:10:13.490293 1361730 system_svc.go:56] duration metric: took 14.307347ms WaitForService to wait for kubelet
	I1217 02:10:13.490328 1361730 kubeadm.go:587] duration metric: took 5.687428814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:10:13.490344 1361730 node_conditions.go:102] verifying NodePressure condition ...
	I1217 02:10:13.498412 1361730 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1217 02:10:13.498447 1361730 node_conditions.go:123] node cpu capacity is 2
	I1217 02:10:13.498460 1361730 node_conditions.go:105] duration metric: took 8.110505ms to run NodePressure ...
	I1217 02:10:13.498474 1361730 start.go:242] waiting for startup goroutines ...
	I1217 02:10:13.498482 1361730 start.go:247] waiting for cluster config update ...
	I1217 02:10:13.498490 1361730 start.go:256] writing updated cluster config ...
	I1217 02:10:13.498801 1361730 ssh_runner.go:195] Run: rm -f paused
	I1217 02:10:13.502849 1361730 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 02:10:13.503557 1361730 kapi.go:59] client config for pause-666844: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/pause-666844/client.key", CAFile:"/home/jenkins/minikube-integration/22168-1134739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 02:10:13.567672 1361730 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqldk" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:10:15.575223 1361730 pod_ready.go:104] pod "coredns-66bc5c9577-gqldk" is not "Ready", error: <nil>
	W1217 02:10:18.074327 1361730 pod_ready.go:104] pod "coredns-66bc5c9577-gqldk" is not "Ready", error: <nil>
	I1217 02:10:20.085296 1361730 pod_ready.go:94] pod "coredns-66bc5c9577-gqldk" is "Ready"
	I1217 02:10:20.085325 1361730 pod_ready.go:86] duration metric: took 6.517624843s for pod "coredns-66bc5c9577-gqldk" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.091185 1361730 pod_ready.go:83] waiting for pod "etcd-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.105233 1361730 pod_ready.go:94] pod "etcd-pause-666844" is "Ready"
	I1217 02:10:20.105276 1361730 pod_ready.go:86] duration metric: took 14.057588ms for pod "etcd-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:20.190774 1361730 pod_ready.go:83] waiting for pod "kube-apiserver-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 02:10:22.196848 1361730 pod_ready.go:104] pod "kube-apiserver-pause-666844" is not "Ready", error: <nil>
	I1217 02:10:23.196454 1361730 pod_ready.go:94] pod "kube-apiserver-pause-666844" is "Ready"
	I1217 02:10:23.196483 1361730 pod_ready.go:86] duration metric: took 3.005679219s for pod "kube-apiserver-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.198749 1361730 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.203323 1361730 pod_ready.go:94] pod "kube-controller-manager-pause-666844" is "Ready"
	I1217 02:10:23.203350 1361730 pod_ready.go:86] duration metric: took 4.571236ms for pod "kube-controller-manager-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.205723 1361730 pod_ready.go:83] waiting for pod "kube-proxy-ntp5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.271497 1361730 pod_ready.go:94] pod "kube-proxy-ntp5b" is "Ready"
	I1217 02:10:23.271527 1361730 pod_ready.go:86] duration metric: took 65.775334ms for pod "kube-proxy-ntp5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.470706 1361730 pod_ready.go:83] waiting for pod "kube-scheduler-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.870754 1361730 pod_ready.go:94] pod "kube-scheduler-pause-666844" is "Ready"
	I1217 02:10:23.870784 1361730 pod_ready.go:86] duration metric: took 400.006426ms for pod "kube-scheduler-pause-666844" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 02:10:23.870797 1361730 pod_ready.go:40] duration metric: took 10.36786422s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 02:10:23.925798 1361730 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1217 02:10:23.930956 1361730 out.go:179] * Done! kubectl is now configured to use "pause-666844" cluster and "default" namespace by default
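The final readiness pass above waits, per label selector, for every kube-system control-plane pod to report the Ready condition. The same check can be expressed with kubectl against this profile (illustrative only; the 4m timeout mirrors the extra wait in the log):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done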
	
	
	==> CRI-O <==
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.161354807Z" level=info msg="Starting container: 87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca" id=70d79866-cc35-47e7-938e-5b1866e36c33 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.170970526Z" level=info msg="Starting container: 50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014" id=aa785bf9-cab7-40a3-9737-4278d8d54e41 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.172177005Z" level=info msg="Started container" PID=2380 containerID=87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca description=kube-system/etcd-pause-666844/etcd id=70d79866-cc35-47e7-938e-5b1866e36c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b84534a4ad51ef27952b6ed1b91327e0159e6c36d672206b5dcd61f08ff9f906
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.17709998Z" level=info msg="Created container 7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22: kube-system/kube-proxy-ntp5b/kube-proxy" id=685c2169-2231-4146-8215-da16180b0b6d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.179381709Z" level=info msg="Starting container: 7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22" id=b404a05a-4eec-44f7-9e50-649ef47dd949 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.179802417Z" level=info msg="Started container" PID=2395 containerID=50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014 description=kube-system/kindnet-vpl6h/kindnet-cni id=aa785bf9-cab7-40a3-9737-4278d8d54e41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=323e783cf9239a72f6df7ab888c2419ef474de877c35e95e66cee7c4dd3da85a
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.182507144Z" level=info msg="Created container 8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20: kube-system/coredns-66bc5c9577-gqldk/coredns" id=a4ec8074-cffb-4473-b472-9c1732fd974c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.186789912Z" level=info msg="Starting container: 8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20" id=6c373d78-f486-4acd-a8ec-548f6c394b3e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.203431884Z" level=info msg="Started container" PID=2390 containerID=8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20 description=kube-system/coredns-66bc5c9577-gqldk/coredns id=6c373d78-f486-4acd-a8ec-548f6c394b3e name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe4358fd31072ef655959c267c24fdf9cad376458f24ea94a0af0f5759a0c779
	Dec 17 02:10:08 pause-666844 crio[2081]: time="2025-12-17T02:10:08.204580199Z" level=info msg="Started container" PID=2376 containerID=7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22 description=kube-system/kube-proxy-ntp5b/kube-proxy id=b404a05a-4eec-44f7-9e50-649ef47dd949 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a644de56145fc1eff2e26d5275a49a21ab9f70fc2388173dc65bfd4831643968
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.539132929Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543208809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543433772Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.543527587Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547468315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547667506Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.547764612Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552262759Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552485105Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.552574867Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.56672494Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.566901618Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.566940345Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.576373799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 02:10:18 pause-666844 crio[2081]: time="2025-12-17T02:10:18.576464381Z" level=info msg="Updated default CNI network name to kindnet"
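The CNI monitoring events above show CRI-O reacting to kindnet rewriting its conflist and re-resolving the default network each time. To confirm which configuration the runtime settled on, one can read the same path it logs (on the node):

    sudo ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/10-kindnet.conflist   # the file CRI-O reports as the default network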
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8bac143739eb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   f808e4259afae       kube-controller-manager-pause-666844   kube-system
	50e0d7ab5a6b2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   323e783cf9239       kindnet-vpl6h                          kube-system
	8f019167b6583       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   fe4358fd31072       coredns-66bc5c9577-gqldk               kube-system
	7cde0d1f85171       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   a644de56145fc       kube-proxy-ntp5b                       kube-system
	87ccbfd65324b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   b84534a4ad51e       etcd-pause-666844                      kube-system
	aa1c4d79fcd4b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   0ffd8976f7ebc       kube-apiserver-pause-666844            kube-system
	6f82e6d867dd8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   9a056da516d10       kube-scheduler-pause-666844            kube-system
	22741cbb922b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   fe4358fd31072       coredns-66bc5c9577-gqldk               kube-system
	7edaa15b6668a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   a644de56145fc       kube-proxy-ntp5b                       kube-system
	28a59b52ed064       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   323e783cf9239       kindnet-vpl6h                          kube-system
	8e601778d0498       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   f808e4259afae       kube-controller-manager-pause-666844   kube-system
	86aa82d55ac5e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   0ffd8976f7ebc       kube-apiserver-pause-666844            kube-system
	d13ff1a752ca5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   9a056da516d10       kube-scheduler-pause-666844            kube-system
	b49736361e655       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b84534a4ad51e       etcd-pause-666844                      kube-system
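Every row above is a CRI container, and the truncated IDs can be passed back to crictl for logs or full runtime state. For example, for the restarted coredns container from the table (any other ID works the same way):

    sudo crictl logs 8f019167b6583       # container stdout/stderr
    sudo crictl inspect 8f019167b6583    # full state and config as JSON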
	
	
	==> coredns [22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47210 - 8848 "HINFO IN 1666532456141977913.4593343254278862335. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02337641s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8f019167b6583149378e11e78511590f1e6db4509a6521ca803da15d7810bb20] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40299 - 17999 "HINFO IN 6297234418200880835.510038069702975500. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00382622s
	
	
	==> describe nodes <==
	Name:               pause-666844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-666844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=pause-666844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T02_09_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 02:09:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-666844
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 02:10:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 02:09:55 +0000   Wed, 17 Dec 2025 02:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-666844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dc957e113b26e583da13082693ddabc
	  System UUID:                55884fe1-5afc-4bf9-9b85-732808c50909
	  Boot ID:                    3c3577c9-c937-4d49-921a-86b4945852ac
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gqldk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-666844                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-vpl6h                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-666844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-666844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-ntp5b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-666844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-666844 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-666844 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-666844 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-666844 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-666844 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-666844 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-666844 event: Registered Node pause-666844 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-666844 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-666844 event: Registered Node pause-666844 in Controller
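The node summary above is the standard describe-node view for the profile's single control-plane node; the same information can be regenerated directly against the cluster:

    kubectl describe node pause-666844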
	
	
	==> dmesg <==
	[Dec17 01:26] overlayfs: idmapped layers are currently not supported
	[  +3.428919] overlayfs: idmapped layers are currently not supported
	[ +34.914517] overlayfs: idmapped layers are currently not supported
	[Dec17 01:27] overlayfs: idmapped layers are currently not supported
	[Dec17 01:28] overlayfs: idmapped layers are currently not supported
	[  +3.208371] overlayfs: idmapped layers are currently not supported
	[Dec17 01:36] overlayfs: idmapped layers are currently not supported
	[Dec17 01:38] overlayfs: idmapped layers are currently not supported
	[Dec17 01:43] overlayfs: idmapped layers are currently not supported
	[ +37.335374] overlayfs: idmapped layers are currently not supported
	[Dec17 01:45] overlayfs: idmapped layers are currently not supported
	[Dec17 01:46] overlayfs: idmapped layers are currently not supported
	[Dec17 01:47] overlayfs: idmapped layers are currently not supported
	[Dec17 01:48] overlayfs: idmapped layers are currently not supported
	[Dec17 01:49] overlayfs: idmapped layers are currently not supported
	[  +7.899083] overlayfs: idmapped layers are currently not supported
	[Dec17 01:50] overlayfs: idmapped layers are currently not supported
	[ +25.041678] overlayfs: idmapped layers are currently not supported
	[Dec17 01:51] overlayfs: idmapped layers are currently not supported
	[ +26.339183] overlayfs: idmapped layers are currently not supported
	[Dec17 01:53] overlayfs: idmapped layers are currently not supported
	[Dec17 01:54] overlayfs: idmapped layers are currently not supported
	[Dec17 01:56] overlayfs: idmapped layers are currently not supported
	[Dec17 01:58] overlayfs: idmapped layers are currently not supported
	[Dec17 02:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [87ccbfd65324b17b3fa906c39bf13afbb6384b0062f3f38392dd453d4b2f01ca] <==
	{"level":"warn","ts":"2025-12-17T02:10:10.674243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.696372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.735007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.751486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.776899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.794483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.871211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.883914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.907789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.948885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.968472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:10.992955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.016603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.039423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.097664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.127974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.159904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.184731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.212532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.225037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.242238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.272138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.289723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.301584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:10:11.408569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60490","server-name":"","error":"EOF"}
	
	
	==> etcd [b49736361e6558f91997e69b825a5deafd43b536dc4ba8e888be613cf3d74539] <==
	{"level":"warn","ts":"2025-12-17T02:09:04.969040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:04.992920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.025599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.062703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.085844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.102232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T02:09:05.153042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T02:09:59.508660Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T02:09:59.508707Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-666844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-17T02:09:59.508805Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T02:09:59.649043Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T02:09:59.650543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.650597Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650643Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T02:09:59.650652Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.650661Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-17T02:09:59.650671Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650708Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T02:09:59.650720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T02:09:59.650727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.654097Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-17T02:09:59.654190Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T02:09:59.654224Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T02:09:59.654232Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-666844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 02:10:30 up  7:52,  0 user,  load average: 1.53, 1.43, 1.66
	Linux pause-666844 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [28a59b52ed06421c1f6b18c5bba0b3b704f7ca96b31e724a8c397cf7145f3120] <==
	I1217 02:09:14.626669       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 02:09:14.627086       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 02:09:14.627246       1 main.go:148] setting mtu 1500 for CNI 
	I1217 02:09:14.627258       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 02:09:14.627271       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T02:09:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 02:09:14.831805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 02:09:14.831898       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 02:09:14.831931       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 02:09:14.832747       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1217 02:09:44.832479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1217 02:09:44.832504       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1217 02:09:44.832600       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1217 02:09:44.832685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1217 02:09:46.432147       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 02:09:46.432247       1 metrics.go:72] Registering metrics
	I1217 02:09:46.432398       1 controller.go:711] "Syncing nftables rules"
	I1217 02:09:54.831447       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 02:09:54.831515       1 main.go:301] handling current node
	
	
	==> kindnet [50e0d7ab5a6b21a3ae960f73115327d0813e612b01e5cbf74b10b5bf86354014] <==
	I1217 02:10:08.336462       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 02:10:08.339752       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 02:10:08.339966       1 main.go:148] setting mtu 1500 for CNI 
	I1217 02:10:08.340008       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 02:10:08.340051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T02:10:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 02:10:08.545131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 02:10:08.545232       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 02:10:08.545268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 02:10:08.545732       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 02:10:12.446058       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 02:10:12.446166       1 metrics.go:72] Registering metrics
	I1217 02:10:12.446277       1 controller.go:711] "Syncing nftables rules"
	I1217 02:10:18.538661       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 02:10:18.538798       1 main.go:301] handling current node
	I1217 02:10:28.540534       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 02:10:28.540592       1 main.go:301] handling current node
	
	
	==> kube-apiserver [86aa82d55ac5e4d58880161d20599e09534c5d63c1518d210497cf7469026b6e] <==
	W1217 02:09:59.530575       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.530642       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533736       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533829       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533876       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533932       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.533987       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534043       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534098       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534154       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534210       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.534262       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535300       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535372       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535414       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535462       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535516       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535562       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.535614       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536071       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536129       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536179       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536561       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536634       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1217 02:09:59.536699       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [aa1c4d79fcd4b4864131c4466f74735487343c2ff68ba612a99354d2de6fb07e] <==
	I1217 02:10:12.440181       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 02:10:12.450049       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 02:10:12.450163       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 02:10:12.452829       1 aggregator.go:171] initial CRD sync complete...
	I1217 02:10:12.452921       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 02:10:12.452952       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 02:10:12.452981       1 cache.go:39] Caches are synced for autoregister controller
	I1217 02:10:12.450468       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1217 02:10:12.453207       1 policy_source.go:240] refreshing policies
	I1217 02:10:12.460177       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 02:10:12.460215       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 02:10:12.460637       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 02:10:12.460770       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 02:10:12.461147       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 02:10:12.462237       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 02:10:12.467475       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1217 02:10:12.489982       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 02:10:12.551208       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 02:10:12.551337       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 02:10:13.142470       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 02:10:13.435391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 02:10:14.974230       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 02:10:15.039914       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 02:10:15.174260       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 02:10:15.226852       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7] <==
	I1217 02:09:12.877866       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 02:09:12.877894       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 02:09:12.878069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 02:09:12.881783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 02:09:12.882973       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1217 02:09:12.887885       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-666844" podCIDRs=["10.244.0.0/24"]
	I1217 02:09:12.893398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 02:09:12.896492       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 02:09:12.915024       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:09:12.916235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 02:09:12.919711       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 02:09:12.919828       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 02:09:12.920783       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 02:09:12.920900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 02:09:12.920913       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 02:09:12.921141       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 02:09:12.920921       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 02:09:12.923398       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 02:09:12.924937       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:09:12.924967       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 02:09:12.924983       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 02:09:12.926108       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 02:09:12.936551       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 02:09:12.938332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:09:57.860132       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a8bac143739eb280ee1fdb70440e2a377b17a757f9fa62235bfdd0a218a5b197] <==
	I1217 02:10:14.821634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 02:10:14.821643       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 02:10:14.821651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 02:10:14.828679       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 02:10:14.828934       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:10:14.830042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:10:14.830068       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 02:10:14.830074       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 02:10:14.833632       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 02:10:14.836508       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 02:10:14.853758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 02:10:14.867618       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 02:10:14.867686       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 02:10:14.867617       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 02:10:14.867732       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 02:10:14.867829       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 02:10:14.868005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 02:10:14.868105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-666844"
	I1217 02:10:14.868196       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 02:10:14.878580       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1217 02:10:14.879665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 02:10:14.883987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 02:10:14.887143       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 02:10:14.899578       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 02:10:14.916065       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [7cde0d1f851710fd1e44d69da2f51fe054e0729b4bff6251dd4a1c34f027cf22] <==
	I1217 02:10:11.245034       1 server_linux.go:53] "Using iptables proxy"
	I1217 02:10:12.061389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 02:10:12.564493       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 02:10:12.564537       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 02:10:12.564604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 02:10:12.808589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 02:10:12.808649       1 server_linux.go:132] "Using iptables Proxier"
	I1217 02:10:12.824011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 02:10:12.824407       1 server.go:527] "Version info" version="v1.34.2"
	I1217 02:10:12.834928       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:10:12.836534       1 config.go:200] "Starting service config controller"
	I1217 02:10:12.836555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 02:10:12.836573       1 config.go:106] "Starting endpoint slice config controller"
	I1217 02:10:12.836584       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 02:10:12.836596       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 02:10:12.836601       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 02:10:12.837363       1 config.go:309] "Starting node config controller"
	I1217 02:10:12.837381       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 02:10:12.837388       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 02:10:12.937678       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 02:10:12.937778       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 02:10:12.937809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [7edaa15b6668afc1752e2efd804e727500b41ee949d9b7cc0e17ebb50dd63fe6] <==
	I1217 02:09:14.642957       1 server_linux.go:53] "Using iptables proxy"
	I1217 02:09:14.819221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 02:09:14.920040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 02:09:14.920152       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 02:09:14.920252       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 02:09:14.942615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 02:09:14.942666       1 server_linux.go:132] "Using iptables Proxier"
	I1217 02:09:14.946779       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 02:09:14.947110       1 server.go:527] "Version info" version="v1.34.2"
	I1217 02:09:14.947135       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:09:14.948842       1 config.go:200] "Starting service config controller"
	I1217 02:09:14.948936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 02:09:14.949004       1 config.go:106] "Starting endpoint slice config controller"
	I1217 02:09:14.949041       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 02:09:14.949080       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 02:09:14.949116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 02:09:14.949824       1 config.go:309] "Starting node config controller"
	I1217 02:09:14.949888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 02:09:14.949922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 02:09:15.049849       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 02:09:15.049901       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 02:09:15.049971       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6f82e6d867dd820ed5113c9de478465d128ecf972ec654f51ad7247894213a18] <==
	I1217 02:10:10.648306       1 serving.go:386] Generated self-signed cert in-memory
	I1217 02:10:13.189272       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 02:10:13.189438       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 02:10:13.197532       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 02:10:13.197595       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 02:10:13.197636       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:10:13.197656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:10:13.197673       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.197691       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.200834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 02:10:13.201000       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 02:10:13.297891       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 02:10:13.298020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 02:10:13.298143       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d13ff1a752ca5fdfc70539adf0f257248cf86cef33502e2f76e14735e142577d] <==
	E1217 02:09:05.889785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 02:09:05.889935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 02:09:05.890018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 02:09:05.890123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 02:09:06.704618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 02:09:06.749336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 02:09:06.779746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 02:09:06.794892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 02:09:06.812001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 02:09:06.859404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 02:09:06.894512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 02:09:07.014682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 02:09:07.025902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 02:09:07.035683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 02:09:07.075715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 02:09:07.106633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1217 02:09:07.141282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 02:09:07.148266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1217 02:09:09.672038       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:09:59.505376       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 02:09:59.505406       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 02:09:59.505431       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 02:09:59.505456       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 02:09:59.505656       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 02:09:59.505672       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.896191    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.901175    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: I1217 02:10:07.949040    1327 scope.go:117] "RemoveContainer" containerID="22741cbb922b74b8c1061965d2c0ac0d7da922ccf982be5a525f1b89ac6e8eb3"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.949649    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntp5b\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="438c1e2b-7d5c-4d99-9143-b6f4169c2015" pod="kube-system/kube-proxy-ntp5b"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.949842    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gqldk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6e2209e5-3adb-4599-9018-3a91a74eca37" pod="kube-system/coredns-66bc5c9577-gqldk"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950003    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950158    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950320    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2f7dd8b69b48534f8c8e38a462218518" pod="kube-system/kube-scheduler-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.950481    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vpl6h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="adc49a53-a6af-4834-b7ac-000e9c04c4eb" pod="kube-system/kindnet-vpl6h"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: I1217 02:10:07.995372    1327 scope.go:117] "RemoveContainer" containerID="8e601778d0498a47b171ce8ceb77b580ffa3b8542cf06a4990b39b37191275e7"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996007    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-vpl6h\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="adc49a53-a6af-4834-b7ac-000e9c04c4eb" pod="kube-system/kindnet-vpl6h"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996186    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ntp5b\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="438c1e2b-7d5c-4d99-9143-b6f4169c2015" pod="kube-system/kube-proxy-ntp5b"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996343    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-gqldk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6e2209e5-3adb-4599-9018-3a91a74eca37" pod="kube-system/coredns-66bc5c9577-gqldk"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.996690    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997013    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="fe6fc7a1d6b736f49e590712a8cb629c" pod="kube-system/etcd-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997193    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="07672ffe58e59d0679acf1c2e5f2c41e" pod="kube-system/kube-controller-manager-pause-666844"
	Dec 17 02:10:07 pause-666844 kubelet[1327]: E1217 02:10:07.997344    1327 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-666844\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2f7dd8b69b48534f8c8e38a462218518" pod="kube-system/kube-scheduler-pause-666844"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.413623    1327 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.414237    1327 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.414690    1327 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-666844\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 17 02:10:12 pause-666844 kubelet[1327]: E1217 02:10:12.415105    1327 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-666844\" is forbidden: User \"system:node:pause-666844\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-666844' and this object" podUID="26108c1eee6bba7f4881b98d2fcf157e" pod="kube-system/kube-apiserver-pause-666844"
	Dec 17 02:10:18 pause-666844 kubelet[1327]: W1217 02:10:18.916117    1327 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 17 02:10:24 pause-666844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 02:10:24 pause-666844 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 02:10:24 pause-666844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-666844 -n pause-666844
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-666844 -n pause-666844: exit status 2 (368.471904ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-666844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.09s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (7200.076s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 02:28:33.510941 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/old-k8s-version-659372/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (17m42s)
		TestStartStop (20m4s)
		TestStartStop/group/newest-cni (8m31s)
		TestStartStop/group/newest-cni/serial (8m31s)
		TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (9s)
		TestStartStop/group/no-preload (10m8s)
		TestStartStop/group/no-preload/serial (10m8s)
		TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1m30s)

goroutine 4264 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40004c2c40, 0x40007bbbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400072a060, {0x534c680, 0x2c, 0x2c}, {0x40007bbd08?, 0x125774?, 0x53750c0?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x4000820be0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x4000820be0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 169 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x40004ee230}, 0x400009e740, 0x400140ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x40004ee230}, 0x0?, 0x400009e740, 0x400009e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x40004ee230?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000460300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 183
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3626 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001501c00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001501c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001501c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001501c00, 0x4001a8a500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1000 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001acd500, 0x4001abb730)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 999
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 168 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40008ef990, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40008ef980)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013f64e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004f702a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x40004ee230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x40004ee230}, 0x400151af38, {0x369e540, 0x4004f665a0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e540?, 0x4004f665a0?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f0c460, 0x3b9aca00, 0x0, 0x1, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 183
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 182 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x40001bc080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 164
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4263 [select]:
os/exec.(*Cmd).watchCtx(0x4000460480, 0x40014ca850)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 4260
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 663 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff5d807a00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400045e180?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400045e180)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400045e180)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40008efe40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40008efe40)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000104a00, {0x36d4020, 0x40008efe40})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000104a00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 661
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 846 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4000735190, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000735180)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001a062a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40004f2e70?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x40004ee230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x40004ee230}, 0x40007c5f38, {0x369e540, 0x4000445c20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42f0?, {0x369e540?, 0x4000445c20?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40007ebd40, 0x3b9aca00, 0x0, 0x1, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3515 [chan receive, 10 minutes]:
testing.(*T).Run(0x40015008c0, {0x296eb91?, 0x0?}, 0x4001a8a580)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40015008c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40015008c0, 0x4000456380)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3511
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 183 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013f64e0, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 164
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 170 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 169
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3932 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x40001bc080?}, 0x4001578300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3942
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4261 [IO wait]:
internal/poll.runtime_pollWait(0xffff5d38d400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001cd02a0?, 0x400149250b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001cd02a0, {0x400149250b, 0x2f5, 0x2f5})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400026e140, {0x400149250b?, 0x40000a1548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400176c570, {0x369c918, 0x4000872100})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x400176c570}, {0x369c918, 0x4000872100}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400026e140?, {0x369cb00, 0x400176c570})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400026e140, {0x369cb00, 0x400176c570})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x400176c570}, {0x369c998, 0x400026e140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40007db880?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4260
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 3933 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013f7260, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3942
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3251 [chan receive, 17 minutes]:
testing.(*T).Run(0x40017aa8c0, {0x296d71f?, 0x18c7fa79fce9?}, 0x40009f2270)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40017aa8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40017aa8c0, 0x339bb10)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 847 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x40004ee230}, 0x4001557f40, 0x4001557f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x40004ee230}, 0x88?, 0x4001557f40, 0x4001557f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x40004ee230?}, 0x4001c00f00?, 0x4001b7cb40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400142cf00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 2092 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4000461800, 0x40014cb730)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1455
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1528 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1527
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3511 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40015001c0, 0x339bd40)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3304
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3562 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40017aae00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40017aae00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40017aae00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40017aae00, 0x4001a8a180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4260 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x40007d1a58, 0x4, 0x4001d1e360, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40007d1bb8?, 0x1929a0?, 0xffffd172e19c?, 0x0?, 0x4001a8a680?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4000870900)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40007d1b88?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000460480)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000460480)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40007db880, 0x4000460480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x36e6638, 0x40004f2850}, 0x40007db880, {0x40016785b8, 0x11}, {0x2978705, 0xa}, {0x6942154b?, 0x4001518f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:203 +0x12c
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40007db880?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40007db880, 0x4001a8a600)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4205
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1116 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001c43080, 0x4001c41110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 775
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3808 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3807
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 852 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x40001bc080?}, 0x400011dc80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 853 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001a062a0, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1526 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x4001cf1850, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001cf1840)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400155e5a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004f71500?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x40004ee230?}, 0x40007eeea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x40004ee230}, 0x40013eaf38, {0x369e540, 0x4001e152f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40007eefa8?, {0x369e540?, 0x4001e152f0?}, 0x10?, 0x4001acc000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001dc4c40, 0x3b9aca00, 0x0, 0x1, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1517
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3805 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x40001bc080?}, 0x4001578300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3953 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x40004ee230}, 0x4001cc0740, 0x40000d7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x40004ee230}, 0x0?, 0x4001cc0740, 0x4001cc0788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x40004ee230?}, 0x36e6638?, 0x40015e2ee0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400011d500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3933
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 848 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 847
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3806 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4000870f50, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000870f40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001d00240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b39d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x40004ee230?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x40004ee230}, 0x40007c9f38, {0x369e540, 0x40007e8e70}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42f0?, {0x369e540?, 0x40007e8e70?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001dc4fa0, 0x3b9aca00, 0x0, 0x1, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3789
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1279 [IO wait, 107 minutes]:
internal/poll.runtime_pollWait(0xffff5d807200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001a8a080?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001a8a080)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001a8a080)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001cf0bc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001cf0bc0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001a52100, {0x36d4020, 0x4001cf0bc0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001a52100)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1277
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3692 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016ea380)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016ea380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016ea380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016ea380, 0x400024fa80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1516 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff680, {{0x36f42f0, 0x40001bc080?}, 0x4001550700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1515
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1527 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x40004ee230}, 0x40020acf40, 0x40013e8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x40004ee230}, 0x67?, 0x40020acf40, 0x40020acf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x40004ee230?}, 0x0?, 0x40020acf50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42f0?, 0x40001bc080?, 0x4001550700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1517
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1517 [chan receive, 80 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400155e5a0, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1515
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3513 [chan receive, 8 minutes]:
testing.(*T).Run(0x4001500540, {0x296eb91?, 0x0?}, 0x400024e780)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001500540)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001500540, 0x40004562c0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3511
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4247 [syscall, 1 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x13, 0x40007d3a58, 0x4, 0x40000e3200, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40007d3bb8?, 0x1929a0?, 0xffffd172e19c?, 0x0?, 0x4001a8a300?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40008708c0)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40007d3b88?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000460300)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000460300)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40007db500, 0x4000460300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x36e6638, 0x40004dea10}, 0x40007db500, {0x4001b7e378, 0x11}, {0x297870f, 0xa}, {0x694214fb?, 0x40000d2f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:203 +0x12c
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40007db500?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40007db500, 0x4001a8a280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4116
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3954 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3953
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1667 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400011d500, 0x40004ee850)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1666
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3625 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001501880)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001501880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001501880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001501880, 0x4001a8a480)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4205 [chan receive]:
testing.(*T).Run(0x40007dae00, {0x299a20f?, 0x40000006ee?}, 0x4001a8a600)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40007dae00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40007dae00, 0x400024e780)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3513
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3624 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001501500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001501500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001501500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001501500, 0x4001a8a400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4262 [IO wait]:
internal/poll.runtime_pollWait(0xffff5d38d000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001cd03c0?, 0x40003c5600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001cd03c0, {0x40003c5600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400026e180, {0x40003c5600?, 0x400009e548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400176c630, {0x369c918, 0x4000872118})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x400176c630}, {0x369c918, 0x4000872118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400026e180?, {0x369cb00, 0x400176c630})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400026e180, {0x369cb00, 0x400176c630})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x400176c630}, {0x369c998, 0x400026e180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001579e00?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4260
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 1057 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001c42300, 0x4001c40310)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1024
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4116 [chan receive, 1 minutes]:
testing.(*T).Run(0x40007db340, {0x299a20f?, 0x40000006ee?}, 0x4001a8a280)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40007db340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40007db340, 0x4001a8a580)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3515
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3691 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40004c2e00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40004c2e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40004c2e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40004c2e00, 0x400024fa00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3623 [chan receive, 17 minutes]:
testing.(*testState).waitParallel(0x4000495090)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001501180)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001501180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001501180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001501180, 0x4001a8a380)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3561
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1199 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x4001acb680)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1197
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 1200 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x4001acb680)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1197
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 3807 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69d0, 0x40004ee230}, 0x40007f5f40, 0x40000d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69d0, 0x40004ee230}, 0x84?, 0x40007f5f40, 0x40007f5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69d0?, 0x40004ee230?}, 0x0?, 0x40015e22a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001a51200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3789
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1686 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000460300, 0x4001e289a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1685
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3789 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001d00240, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3304 [chan receive, 20 minutes]:
testing.(*T).Run(0x40017ab6c0, {0x296d71f?, 0x400140df58?}, 0x339bd40)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40017ab6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40017ab6c0, 0x339bb58)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4248 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xffff5d38d600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001cd0240?, 0x4001492d0b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001cd0240, {0x4001492d0b, 0x2f5, 0x2f5})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400026e130, {0x4001492d0b?, 0x40020af548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400176c5a0, {0x369c918, 0x400026e1d0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x400176c5a0}, {0x369c918, 0x400026e1d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400026e130?, {0x369cb00, 0x400176c5a0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400026e130, {0x369cb00, 0x400176c5a0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x400176c5a0}, {0x369c998, 0x400026e130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40007db500?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4247
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4250 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0x4000460300, 0x4001c40930)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 4247
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3936 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x4000735310, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000735300)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013f7260)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000831f0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69d0?, 0x40004ee230?}, 0x400009e6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69d0, 0x40004ee230}, 0x4001516f38, {0x369e540, 0x4001d96720}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e540?, 0x4001d96720?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001cf8d40, 0x3b9aca00, 0x0, 0x1, 0x40004ee230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3933
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3561 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40017aa700, 0x40009f2270)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3251
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4249 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xffff5d38ca00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001cd0300?, 0x4000470600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001cd0300, {0x4000470600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400026e170, {0x4000470600?, 0x40020ab548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400176c600, {0x369c918, 0x400026e1e0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cb00, 0x400176c600}, {0x369c918, 0x400026e1e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400026e170?, {0x369cb00, 0x400176c600})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400026e170, {0x369cb00, 0x400176c600})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cb00, 0x400176c600}, {0x369c998, 0x400026e170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40016ea700?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4247
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4


Test pass (234/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.1
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.12
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 7.02
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 5.23
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 166
40 TestAddons/serial/GCPAuth/Namespaces 0.2
41 TestAddons/serial/GCPAuth/FakeCredentials 9.98
57 TestAddons/StoppedEnableDisable 12.52
58 TestCertOptions 40.11
59 TestCertExpiration 333.26
61 TestForceSystemdFlag 43.62
62 TestForceSystemdEnv 42.05
67 TestErrorSpam/setup 29.3
68 TestErrorSpam/start 0.84
69 TestErrorSpam/status 1.13
70 TestErrorSpam/pause 6.51
71 TestErrorSpam/unpause 5.4
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.54
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 119.84
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
84 TestFunctional/serial/CacheCmd/cache/add_local 1.27
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 38.31
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.52
95 TestFunctional/serial/LogsFileCmd 2.01
96 TestFunctional/serial/InvalidService 4.12
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 10.88
100 TestFunctional/parallel/DryRun 0.55
101 TestFunctional/parallel/InternationalLanguage 0.28
102 TestFunctional/parallel/StatusCmd 1.82
106 TestFunctional/parallel/ServiceCmdConnect 8.6
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 19.64
110 TestFunctional/parallel/SSHCmd 0.73
111 TestFunctional/parallel/CpCmd 2.43
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.13
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
122 TestFunctional/parallel/License 0.36
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.55
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/MountCmd/any-port 7.02
139 TestFunctional/parallel/ServiceCmd/List 0.66
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
141 TestFunctional/parallel/MountCmd/specific-port 2.61
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
143 TestFunctional/parallel/ServiceCmd/Format 0.38
144 TestFunctional/parallel/ServiceCmd/URL 0.5
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.77
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 1.16
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.68
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.42
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.13
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.81
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.35
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.47
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.22
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.56
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.61
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.68
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.86
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.78
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.2
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.8
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.72
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.4
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.73
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.03
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.44
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.44
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 199.15
265 TestMultiControlPlane/serial/DeployApp 6.78
266 TestMultiControlPlane/serial/PingHostFromPods 1.51
267 TestMultiControlPlane/serial/AddWorkerNode 59.33
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
270 TestMultiControlPlane/serial/CopyFile 20.58
271 TestMultiControlPlane/serial/StopSecondaryNode 12.87
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 137.31
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.18
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
278 TestMultiControlPlane/serial/StopCluster 36.22
287 TestJSONOutput/start/Command 78.97
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.97
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 39.06
313 TestKicCustomNetwork/use_default_bridge_network 36.57
314 TestKicExistingNetwork 34.07
315 TestKicCustomSubnet 35.19
316 TestKicStaticIP 35.5
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 78.05
321 TestMountStart/serial/StartWithMountFirst 8.96
322 TestMountStart/serial/VerifyMountFirst 0.3
323 TestMountStart/serial/StartWithMountSecond 8.87
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.26
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.74
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 138.91
333 TestMultiNode/serial/DeployApp2Nodes 5.35
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 57.64
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.5
339 TestMultiNode/serial/StopNode 2.41
340 TestMultiNode/serial/StartAfterStop 8.08
341 TestMultiNode/serial/RestartKeepsNodes 72.45
342 TestMultiNode/serial/DeleteNode 5.7
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 54.19
345 TestMultiNode/serial/ValidateNameConflict 34.18
350 TestPreload 150.6
352 TestScheduledStopUnix 110.47
355 TestInsufficientStorage 13.07
356 TestRunningBinaryUpgrade 298.9
359 TestMissingContainerUpgrade 115.67
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 46.3
363 TestNoKubernetes/serial/StartWithStopK8s 19.87
364 TestNoKubernetes/serial/Start 7.99
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
367 TestNoKubernetes/serial/ProfileList 0.69
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 9.85
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.55
371 TestStoppedBinaryUpgrade/Setup 1.99
372 TestStoppedBinaryUpgrade/Upgrade 321.47
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.8
382 TestPause/serial/Start 84.43
383 TestPause/serial/SecondStartNoReconfiguration 26.09
x
+
TestDownloadOnly/v1.28.0/json-events (6.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.094910863s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:28:43.066396 1136597 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 00:28:43.066475 1136597 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
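
For reference, the preload-exists step only verifies that the cached tarball is already on disk. A minimal shell sketch of the same check, using the cache path logged by preload.go above (the MINIKUBE_HOME value is this CI job's layout; substitute your own):

    # Check for the cached v1.28.0 cri-o preload tarball, as the test does.
    MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
    TARBALL="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
    if [ -f "$TARBALL" ]; then echo "preload exists: $TARBALL"; else echo "preload missing: $TARBALL"; fi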

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-852471
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-852471: exit status 85 (114.438622ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-852471 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:28:37
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:28:37.020120 1136602 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:28:37.020360 1136602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:37.020397 1136602 out.go:374] Setting ErrFile to fd 2...
	I1217 00:28:37.020443 1136602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:37.021004 1136602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	W1217 00:28:37.021319 1136602 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22168-1134739/.minikube/config/config.json: open /home/jenkins/minikube-integration/22168-1134739/.minikube/config/config.json: no such file or directory
	I1217 00:28:37.022078 1136602 out.go:368] Setting JSON to true
	I1217 00:28:37.023233 1136602 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22267,"bootTime":1765909050,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:28:37.023389 1136602 start.go:143] virtualization:  
	I1217 00:28:37.030607 1136602 out.go:99] [download-only-852471] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1217 00:28:37.030814 1136602 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 00:28:37.030934 1136602 notify.go:221] Checking for updates...
	I1217 00:28:37.036293 1136602 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:28:37.039546 1136602 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:28:37.042767 1136602 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:28:37.045952 1136602 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:28:37.049156 1136602 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:28:37.055223 1136602 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:28:37.055591 1136602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:28:37.076768 1136602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:28:37.076948 1136602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:37.148289 1136602 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 00:28:37.138163008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:37.148402 1136602 docker.go:319] overlay module found
	I1217 00:28:37.151513 1136602 out.go:99] Using the docker driver based on user configuration
	I1217 00:28:37.151559 1136602 start.go:309] selected driver: docker
	I1217 00:28:37.151567 1136602 start.go:927] validating driver "docker" against <nil>
	I1217 00:28:37.151680 1136602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:37.215032 1136602 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-17 00:28:37.205688351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:37.215185 1136602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:28:37.215480 1136602 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:28:37.215632 1136602 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:28:37.218816 1136602 out.go:171] Using Docker driver with root privileges
	I1217 00:28:37.221934 1136602 cni.go:84] Creating CNI manager for ""
	I1217 00:28:37.222015 1136602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:28:37.222029 1136602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:28:37.222115 1136602 start.go:353] cluster config:
	{Name:download-only-852471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-852471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:28:37.225258 1136602 out.go:99] Starting "download-only-852471" primary control-plane node in "download-only-852471" cluster
	I1217 00:28:37.225281 1136602 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:28:37.228163 1136602 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:28:37.228209 1136602 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:28:37.228375 1136602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:28:37.244732 1136602 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:37.244992 1136602 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:28:37.245148 1136602 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:37.279566 1136602 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:37.279599 1136602 cache.go:65] Caching tarball of preloaded images
	I1217 00:28:37.279787 1136602 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:28:37.283247 1136602 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 00:28:37.283278 1136602 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 00:28:37.365449 1136602 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1217 00:28:37.365583 1136602 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-852471 host does not exist
	  To start a cluster, run: "minikube start -p download-only-852471"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.12s)
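
The Last Start log above fetches the preload from GCS with an md5 checksum appended to the download URL. A hedged shell sketch of the equivalent manual download and verification, reusing the URL and checksum exactly as logged (the local filename is illustrative):

    # Download the v1.28.0 cri-o preload and verify it against the md5 returned by the GCS API.
    curl -fLo preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
    echo "e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c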

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-852471
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (7.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-514568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-514568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.022768423s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (7.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:28:50.558646 1136597 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 00:28:50.558681 1136597 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-514568
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-514568: exit status 85 (89.198173ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-852471 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-852471                                                                                                                                                   │ download-only-852471 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ -o=json --download-only -p download-only-514568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-514568 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:28:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:28:43.585616 1136804 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:28:43.585732 1136804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:43.585743 1136804 out.go:374] Setting ErrFile to fd 2...
	I1217 00:28:43.585748 1136804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:43.586014 1136804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:28:43.586500 1136804 out.go:368] Setting JSON to true
	I1217 00:28:43.587315 1136804 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22274,"bootTime":1765909050,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:28:43.587390 1136804 start.go:143] virtualization:  
	I1217 00:28:43.590351 1136804 out.go:99] [download-only-514568] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:28:43.590594 1136804 notify.go:221] Checking for updates...
	I1217 00:28:43.593307 1136804 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:28:43.596150 1136804 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:28:43.598899 1136804 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:28:43.601654 1136804 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:28:43.604445 1136804 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:28:43.609910 1136804 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:28:43.610226 1136804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:28:43.634251 1136804 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:28:43.634384 1136804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:43.694014 1136804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 00:28:43.684256932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:43.694132 1136804 docker.go:319] overlay module found
	I1217 00:28:43.697033 1136804 out.go:99] Using the docker driver based on user configuration
	I1217 00:28:43.697082 1136804 start.go:309] selected driver: docker
	I1217 00:28:43.697090 1136804 start.go:927] validating driver "docker" against <nil>
	I1217 00:28:43.697212 1136804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:43.762855 1136804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-12-17 00:28:43.752889244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:43.763017 1136804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:28:43.763291 1136804 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:28:43.763453 1136804 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:28:43.766478 1136804 out.go:171] Using Docker driver with root privileges
	I1217 00:28:43.769348 1136804 cni.go:84] Creating CNI manager for ""
	I1217 00:28:43.769426 1136804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:28:43.769441 1136804 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:28:43.769525 1136804 start.go:353] cluster config:
	{Name:download-only-514568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-514568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:28:43.772396 1136804 out.go:99] Starting "download-only-514568" primary control-plane node in "download-only-514568" cluster
	I1217 00:28:43.772434 1136804 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:28:43.775337 1136804 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:28:43.775390 1136804 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:28:43.775578 1136804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:28:43.792096 1136804 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:43.792236 1136804 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:28:43.792258 1136804 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:28:43.792263 1136804 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:28:43.792274 1136804 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:28:43.840304 1136804 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:43.840333 1136804 cache.go:65] Caching tarball of preloaded images
	I1217 00:28:43.840526 1136804 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:28:43.843606 1136804 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1217 00:28:43.843637 1136804 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 00:28:43.932312 1136804 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1217 00:28:43.932366 1136804 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-514568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-514568"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-514568
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (5.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-633590 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-633590 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.227194377s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (5.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:28:56.240184 1136597 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1217 00:28:56.240218 1136597 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-633590
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-633590: exit status 85 (84.246824ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-852471 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-852471 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-852471                                                                                                                                                          │ download-only-852471 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ -o=json --download-only -p download-only-514568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-514568 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ delete  │ -p download-only-514568                                                                                                                                                          │ download-only-514568 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ start   │ -o=json --download-only -p download-only-633590 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-633590 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:28:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:28:51.060014 1137010 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:28:51.060175 1137010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:51.060188 1137010 out.go:374] Setting ErrFile to fd 2...
	I1217 00:28:51.060194 1137010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:28:51.060489 1137010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:28:51.060925 1137010 out.go:368] Setting JSON to true
	I1217 00:28:51.061814 1137010 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22281,"bootTime":1765909050,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:28:51.061891 1137010 start.go:143] virtualization:  
	I1217 00:28:51.065377 1137010 out.go:99] [download-only-633590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:28:51.065656 1137010 notify.go:221] Checking for updates...
	I1217 00:28:51.068579 1137010 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:28:51.072073 1137010 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:28:51.074988 1137010 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:28:51.078050 1137010 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:28:51.081127 1137010 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1217 00:28:51.086870 1137010 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:28:51.087135 1137010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:28:51.120018 1137010 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:28:51.120185 1137010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:51.186701 1137010 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:51.1765549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:51.186811 1137010 docker.go:319] overlay module found
	I1217 00:28:51.189774 1137010 out.go:99] Using the docker driver based on user configuration
	I1217 00:28:51.189819 1137010 start.go:309] selected driver: docker
	I1217 00:28:51.189826 1137010 start.go:927] validating driver "docker" against <nil>
	I1217 00:28:51.189927 1137010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:28:51.253864 1137010 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-17 00:28:51.24517067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:28:51.254020 1137010 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:28:51.254280 1137010 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1217 00:28:51.254439 1137010 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:28:51.257495 1137010 out.go:171] Using Docker driver with root privileges
	I1217 00:28:51.260339 1137010 cni.go:84] Creating CNI manager for ""
	I1217 00:28:51.260407 1137010 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 00:28:51.260585 1137010 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 00:28:51.260693 1137010 start.go:353] cluster config:
	{Name:download-only-633590 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-633590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:28:51.263601 1137010 out.go:99] Starting "download-only-633590" primary control-plane node in "download-only-633590" cluster
	I1217 00:28:51.263618 1137010 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 00:28:51.266367 1137010 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:28:51.266403 1137010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:28:51.266594 1137010 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:28:51.282654 1137010 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:28:51.282806 1137010 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:28:51.282827 1137010 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:28:51.282839 1137010 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:28:51.282847 1137010 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:28:51.324778 1137010 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:51.324804 1137010 cache.go:65] Caching tarball of preloaded images
	I1217 00:28:51.325002 1137010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:28:51.328095 1137010 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1217 00:28:51.328116 1137010 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1217 00:28:51.410856 1137010 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1217 00:28:51.410908 1137010 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1217 00:28:55.571694 1137010 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:28:55.572083 1137010 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/download-only-633590/config.json ...
	I1217 00:28:55.572118 1137010 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/download-only-633590/config.json: {Name:mk8e9311e6e35c0743e69a773af57d9f49d041ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:28:55.572313 1137010 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:28:55.572513 1137010 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-633590 host does not exist
	  To start a cluster, run: "minikube start -p download-only-633590"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
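For reference, the preload download recorded above can be spot-checked by hand against the MD5 returned by the GCS API; a minimal sketch outside the harness (temporary path assumed):

    # fetch the same preload tarball and compare against the checksum logged above
    curl -fLo /tmp/preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4"
    md5sum /tmp/preload.tar.lz4    # expected: e7da2fb676059c00535073e4a61150f1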

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-633590
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 00:28:57.588781 1136597 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-272608 --alsologtostderr --binary-mirror http://127.0.0.1:43199 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-272608" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-272608
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-219291
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-219291: exit status 85 (66.96548ms)

                                                
                                                
-- stdout --
	* Profile "addons-219291" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-219291"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-219291
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-219291: exit status 85 (80.756528ms)

                                                
                                                
-- stdout --
	* Profile "addons-219291" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-219291"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (166s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-219291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-219291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m46.000897585s)
--- PASS: TestAddons/Setup (166.00s)
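For readers checking which of the requested addons actually came up, a quick sketch against the live profile (not part of the harness output):

    out/minikube-linux-arm64 -p addons-219291 addons list
    kubectl --context addons-219291 get pods -A    # addon workloads appear in kube-system and their own namespaces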

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-219291 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-219291 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.98s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-219291 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-219291 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [75e14858-acc7-478e-b6f4-1ead1bced578] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [75e14858-acc7-478e-b6f4-1ead1bced578] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003709765s
addons_test.go:696: (dbg) Run:  kubectl --context addons-219291 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-219291 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-219291 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-219291 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.98s)
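The credentials verified above are injected by the gcp-auth addon through a mutating admission webhook; a hedged way to see that registration (the exact webhook name is whatever the addon installs, not something this test asserts):

    kubectl --context addons-219291 get mutatingwebhookconfigurations | grep -i gcp-auth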

                                                
                                    
TestAddons/StoppedEnableDisable (12.52s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-219291
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-219291: (12.242618384s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-219291
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-219291
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-219291
--- PASS: TestAddons/StoppedEnableDisable (12.52s)

                                                
                                    
TestCertOptions (40.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-034794 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-034794 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.239475617s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-034794 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-034794 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-034794 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-034794" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-034794
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-034794: (2.135756511s)
--- PASS: TestCertOptions (40.11s)
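The assertions above reduce to the custom SANs and API server port appearing in the generated certificate and kubeconfig; a manual check while the profile still exists (sketch, not harness output):

    out/minikube-linux-arm64 -p cert-options-034794 ssh "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"    # should list 192.168.15.15 and www.google.com
    kubectl config view --minify --context cert-options-034794 -o jsonpath='{.clusters[0].cluster.server}'    # should end in :8555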

                                                
                                    
TestCertExpiration (333.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-489230 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1217 02:11:45.354065 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-489230 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.480058146s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-489230 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-489230 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m48.705890388s)
helpers_test.go:176: Cleaning up "cert-expiration-489230" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-489230
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-489230: (3.072614716s)
--- PASS: TestCertExpiration (333.26s)
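The test starts the profile with 3-minute certificates and later restarts it with an 8760h expiration; while the profile is around, the current validity window can be read directly (sketch, cert path as used by TestCertOptions above):

    out/minikube-linux-arm64 -p cert-expiration-489230 ssh "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
    # notBefore/notAfter should reflect the --cert-expiration value passed on the most recent start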

                                                
                                    
TestForceSystemdFlag (43.62s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-485146 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-485146 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.481902799s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-485146 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-485146" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-485146
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-485146: (4.707013937s)
--- PASS: TestForceSystemdFlag (43.62s)
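The "cat /etc/crio/crio.conf.d/02-crio.conf" step above is checking that --force-systemd switched CRI-O to the systemd cgroup manager; a hedged manual equivalent (the expected line is an assumption about the drop-in contents):

    out/minikube-linux-arm64 -p force-systemd-flag-485146 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected: cgroup_manager = "systemd"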

                                                
                                    
TestForceSystemdEnv (42.05s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-444320 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-444320 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.252167907s)
helpers_test.go:176: Cleaning up "force-systemd-env-444320" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-444320
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-444320: (2.793169986s)
--- PASS: TestForceSystemdEnv (42.05s)
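This is the environment-variable counterpart of the flag test above. Outside the harness the same behaviour can be requested through MINIKUBE_FORCE_SYSTEMD (the variable shows up in minikube's own start output elsewhere in this report); the value "true" is an assumption here:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-444320 --memory=3072 --driver=docker --container-runtime=crio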

                                                
                                    
TestErrorSpam/setup (29.3s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-180700 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-180700 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-180700 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-180700 --driver=docker  --container-runtime=crio: (29.300162066s)
--- PASS: TestErrorSpam/setup (29.30s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (6.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause: exit status 80 (1.738299617s)

                                                
                                                
-- stdout --
	* Pausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause: exit status 80 (2.438072198s)

                                                
                                                
-- stdout --
	* Pausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause: exit status 80 (2.329887194s)

                                                
                                                
-- stdout --
	* Pausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.51s)

                                                
                                    
TestErrorSpam/unpause (5.4s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause: exit status 80 (1.847011791s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause: exit status 80 (1.647898559s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause: exit status 80 (1.903273822s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-180700 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T00:35:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.40s)
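Both the pause and unpause spam above stem from the same underlying failure: "sudo runc list -f json" inside the node exits 1 because /run/runc does not exist. It can be reproduced directly against the profile (sketch):

    out/minikube-linux-arm64 -p nospam-180700 ssh "sudo runc list -f json"
    # fails with: open /run/runc: no such file or directory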

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 stop: (1.315560962s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-180700 --log_dir /tmp/nospam-180700 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.54s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1217 00:36:45.361533 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.368131 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.379484 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.400944 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.442429 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.523867 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:45.685580 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:46.006938 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:46.648959 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:47.930662 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:50.492577 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:36:55.614981 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:37:05.856359 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-099267 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.539025871s)
--- PASS: TestFunctional/serial/StartWithProxy (79.54s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (119.84s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 00:37:12.452767 1136597 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --alsologtostderr -v=8
E1217 00:37:26.338208 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:38:07.299650 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-099267 --alsologtostderr -v=8: (1m59.837155061s)
functional_test.go:678: soft start took 1m59.841174558s for "functional-099267" cluster.
I1217 00:39:12.290245 1136597 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (119.84s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-099267 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:3.1: (1.246171135s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:3.3: (1.181624559s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 cache add registry.k8s.io/pause:latest: (1.128912794s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-099267 /tmp/TestFunctionalserialCacheCmdcacheadd_local2168023019/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache add minikube-local-cache-test:functional-099267
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache delete minikube-local-cache-test:functional-099267
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-099267
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.537357ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 kubectl -- --context functional-099267 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-099267 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:39:29.223386 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-099267 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.309672812s)
functional_test.go:776: restart took 38.30979793s for "functional-099267" cluster.
I1217 00:39:58.314959 1136597 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (38.31s)
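The --extra-config value above should surface as an --enable-admission-plugins argument on the kube-apiserver static pod; a hedged way to confirm it (label and jsonpath follow standard kubeadm conventions and are not taken from this log):

    kubectl --context functional-099267 -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission
    # expected to include: --enable-admission-plugins=NamespaceAutoProvision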

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-099267 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 logs: (1.516278078s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 logs --file /tmp/TestFunctionalserialLogsFileCmd2007925528/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 logs --file /tmp/TestFunctionalserialLogsFileCmd2007925528/001/logs.txt: (2.010116101s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.01s)

                                                
                                    
TestFunctional/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-099267 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-099267
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-099267: exit status 115 (406.235285ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31566 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-099267 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
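The SVC_UNREACHABLE exit above is the expected outcome: testdata/invalidsvc.yaml creates a service with no running pod behind it, as the stderr states. A quick way to see that state before the test deletes the manifest (sketch):

    kubectl --context functional-099267 get svc invalid-svc
    kubectl --context functional-099267 get endpoints invalid-svc    # no ready addresses behind the NodePort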

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 config get cpus: exit status 14 (80.448616ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 config get cpus: exit status 14 (66.135287ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-099267 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-099267 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1162589: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.88s)

                                                
                                    
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-099267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (217.371074ms)

                                                
                                                
-- stdout --
	* [functional-099267] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:40:40.341129 1162067 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:40:40.341300 1162067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:40.341314 1162067 out.go:374] Setting ErrFile to fd 2...
	I1217 00:40:40.341320 1162067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:40.341631 1162067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:40:40.342011 1162067 out.go:368] Setting JSON to false
	I1217 00:40:40.343091 1162067 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22991,"bootTime":1765909050,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:40:40.343166 1162067 start.go:143] virtualization:  
	I1217 00:40:40.346322 1162067 out.go:179] * [functional-099267] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 00:40:40.350061 1162067 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:40:40.350235 1162067 notify.go:221] Checking for updates...
	I1217 00:40:40.355978 1162067 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:40:40.358711 1162067 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:40:40.361585 1162067 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:40:40.364523 1162067 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:40:40.367455 1162067 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:40:40.370734 1162067 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:40:40.371371 1162067 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:40:40.393908 1162067 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:40:40.394035 1162067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:40:40.480642 1162067 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 00:40:40.470100228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:40:40.480751 1162067 docker.go:319] overlay module found
	I1217 00:40:40.487566 1162067 out.go:179] * Using the docker driver based on existing profile
	I1217 00:40:40.491952 1162067 start.go:309] selected driver: docker
	I1217 00:40:40.491996 1162067 start.go:927] validating driver "docker" against &{Name:functional-099267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-099267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:40:40.492114 1162067 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:40:40.497945 1162067 out.go:203] 
	W1217 00:40:40.501849 1162067 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:40:40.505689 1162067 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)
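
Note: the memory validation exercised above can be reproduced by hand roughly as follows, using the same binary, profile and flags this run used; exit code 23 is what this run observed for the undersized request.

    # 250MB is below the usable minimum of 1800MB that minikube reports, so the dry run fails fast
    out/minikube-linux-arm64 start -p functional-099267 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?    # 23 (RSRC_INSUFFICIENT_REQ_MEMORY) in this run
    # without the undersized memory request, the same dry run validates the existing profile and succeeds
    out/minikube-linux-arm64 start -p functional-099267 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio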

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-099267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-099267 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.992644ms)

                                                
                                                
-- stdout --
	* [functional-099267] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:40:40.131485 1161973 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:40:40.132871 1161973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:40.132895 1161973 out.go:374] Setting ErrFile to fd 2...
	I1217 00:40:40.132902 1161973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:40:40.134069 1161973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 00:40:40.134552 1161973 out.go:368] Setting JSON to false
	I1217 00:40:40.135515 1161973 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22991,"bootTime":1765909050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 00:40:40.135594 1161973 start.go:143] virtualization:  
	I1217 00:40:40.140100 1161973 out.go:179] * [functional-099267] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 00:40:40.143096 1161973 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:40:40.143153 1161973 notify.go:221] Checking for updates...
	I1217 00:40:40.146107 1161973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:40:40.149061 1161973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 00:40:40.152292 1161973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 00:40:40.155138 1161973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 00:40:40.158154 1161973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:40:40.161685 1161973 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:40:40.162440 1161973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:40:40.198231 1161973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 00:40:40.198372 1161973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:40:40.270237 1161973 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-17 00:40:40.260795055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 00:40:40.270342 1161973 docker.go:319] overlay module found
	I1217 00:40:40.275255 1161973 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 00:40:40.278303 1161973 start.go:309] selected driver: docker
	I1217 00:40:40.278338 1161973 start.go:927] validating driver "docker" against &{Name:functional-099267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-099267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:40:40.278458 1161973 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:40:40.282118 1161973 out.go:203] 
	W1217 00:40:40.285072 1161973 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:40:40.287864 1161973 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-099267 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-099267 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-zw94v" [fa076904-fa6c-4f02-a816-f0410d93889d] Pending
helpers_test.go:353: "hello-node-connect-7d85dfc575-zw94v" [fa076904-fa6c-4f02-a816-f0410d93889d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-zw94v" [fa076904-fa6c-4f02-a816-f0410d93889d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003768262s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31807
functional_test.go:1680: http://192.168.49.2:31807: success! body:
Request served by hello-node-connect-7d85dfc575-zw94v

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31807
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
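
Note: a minimal by-hand version of the flow above, using the same image and profile as this run; the NodePort URL differs between runs.

    kubectl --context functional-099267 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-099267 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-099267 get pods -l app=hello-node-connect        # wait until the pod is Running
    URL=$(out/minikube-linux-arm64 -p functional-099267 service hello-node-connect --url)
    curl -s "$URL"    # echo-server reflects the request back; in this run the URL was http://192.168.49.2:31807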

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (19.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [d496e990-8771-468d-8bdf-acffa46f5841] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004322103s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-099267 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-099267 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-099267 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-099267 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e4590770-7b96-421d-95d4-d18ae26c75a9] Pending
helpers_test.go:353: "sp-pod" [e4590770-7b96-421d-95d4-d18ae26c75a9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.005270273s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-099267 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-099267 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-099267 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6089b523-f165-48ff-aa29-da1f37047b92] Pending
helpers_test.go:353: "sp-pod" [6089b523-f165-48ff-aa29-da1f37047b92] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003491931s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-099267 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.64s)
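
Note: condensed, the persistence check above is the following sequence; the manifests are the ones referenced from testdata/ in this log.

    kubectl --context functional-099267 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-099267 get pvc myclaim -o=json                   # the test inspects the claim's JSON at this point
    kubectl --context functional-099267 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-099267 exec sp-pod -- touch /tmp/mount/foo       # write a file through the claim
    kubectl --context functional-099267 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-099267 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-099267 exec sp-pod -- ls /tmp/mount              # foo must survive the pod being recreated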

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh -n functional-099267 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cp functional-099267:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd781398679/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh -n functional-099267 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh -n functional-099267 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1136597/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /etc/test/nested/copy/1136597/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1136597.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /etc/ssl/certs/1136597.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1136597.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /usr/share/ca-certificates/1136597.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11365972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /etc/ssl/certs/11365972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11365972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /usr/share/ca-certificates/11365972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-099267 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
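
Note: outside the test harness the same node labels are easier to eyeball with kubectl's built-in flag than with the go-template used above.

    kubectl --context functional-099267 get nodes --show-labels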

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active docker": exit status 1 (417.763219ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active containerd": exit status 1 (429.819579ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
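
Note: systemctl is-active prints the unit state and exits non-zero for anything other than active, which is why the docker and containerd checks above come back over ssh with status 3 even though they print "inactive". A by-hand version follows; the crio line is an extra sanity check, not part of this test.

    out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active docker"       # inactive, ssh exits 3
    out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active containerd"   # inactive, ssh exits 3
    out/minikube-linux-arm64 -p functional-099267 ssh "sudo systemctl is-active crio"         # expected active, since crio is this profile's runtime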

                                                
                                    
x
+
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 1158991: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-099267 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [c04306ff-d0c3-4823-a4a1-44946aa23834] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [c04306ff-d0c3-4823-a4a1-44946aa23834] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003213803s
I1217 00:40:18.458124 1136597 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-099267 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.120.4 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
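
Note: the tunnel subtests above amount to roughly the following by hand; the ingress IP (10.99.120.4) is simply the one this particular run was assigned.

    out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr &                  # keep a tunnel running in the background
    kubectl --context functional-099267 apply -f testdata/testsvc.yaml                        # creates the nginx-svc LoadBalancer service
    kubectl --context functional-099267 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.99.120.4                                                               # reachable once the tunnel is routing traffic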

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-099267 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-099267 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-099267 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-vzk45" [75b0cdb6-df92-4c85-9aba-3b407dcaa6fb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-vzk45" [75b0cdb6-df92-4c85-9aba-3b407dcaa6fb] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003800835s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "380.615256ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.45298ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "354.974887ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "61.635377ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
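
Note: the timing gap above (roughly 355ms versus 62ms) is presumably down to --light, which returns the profile list without validating each cluster's status; quick comparison:

    out/minikube-linux-arm64 profile list -o json            # probes every profile's status
    out/minikube-linux-arm64 profile list -o json --light    # skips the status checks, hence the much shorter runtime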

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdany-port4293884624/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765932029466444630" to /tmp/TestFunctionalparallelMountCmdany-port4293884624/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765932029466444630" to /tmp/TestFunctionalparallelMountCmdany-port4293884624/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765932029466444630" to /tmp/TestFunctionalparallelMountCmdany-port4293884624/001/test-1765932029466444630
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.901669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:40:29.839468 1136597 retry.go:31] will retry after 414.473546ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:40 test-1765932029466444630
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh cat /mount-9p/test-1765932029466444630
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-099267 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [05ce839b-ba71-4b80-bfb1-4a84b00693be] Pending
helpers_test.go:353: "busybox-mount" [05ce839b-ba71-4b80-bfb1-4a84b00693be] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [05ce839b-ba71-4b80-bfb1-4a84b00693be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [05ce839b-ba71-4b80-bfb1-4a84b00693be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003744738s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-099267 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdany-port4293884624/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)
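
Note: the 9p mount flow above, condensed; /tmp/host-dir is a hypothetical placeholder, the test itself uses a per-run temp directory.

    out/minikube-linux-arm64 mount -p functional-099267 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry right after mounting, as seen above
    out/minikube-linux-arm64 -p functional-099267 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-099267 ssh "sudo umount -f /mount-9p"         # cleanup, as the test does before stopping the mount process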

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service list -o json
functional_test.go:1504: Took "617.603785ms" to run "out/minikube-linux-arm64 -p functional-099267 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdspecific-port2648187308/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (465.454899ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:40:36.955284 1136597 retry.go:31] will retry after 716.796192ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdspecific-port2648187308/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "sudo umount -f /mount-9p": exit status 1 (385.344682ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-099267 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdspecific-port2648187308/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31748
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31748
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T" /mount1: exit status 1 (937.316547ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:40:40.046160 1136597 retry.go:31] will retry after 563.656807ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-099267 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-099267 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1101492810/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 version -o=json --components: (1.163323068s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-099267 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-099267
localhost/kicbase/echo-server:functional-099267
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-099267 image ls --format short --alsologtostderr:
I1217 00:40:54.132707 1164042 out.go:360] Setting OutFile to fd 1 ...
I1217 00:40:54.132852 1164042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.132898 1164042 out.go:374] Setting ErrFile to fd 2...
I1217 00:40:54.132905 1164042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.133211 1164042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 00:40:54.133975 1164042 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.134137 1164042 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.134695 1164042 cli_runner.go:164] Run: docker container inspect functional-099267 --format={{.State.Status}}
I1217 00:40:54.153194 1164042 ssh_runner.go:195] Run: systemctl --version
I1217 00:40:54.153274 1164042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-099267
I1217 00:40:54.173153 1164042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-099267/id_rsa Username:docker}
I1217 00:40:54.285090 1164042 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-099267 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-099267  │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/minikube-local-cache-test     │ functional-099267  │ 715206a349d8b │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-099267 image ls --format table --alsologtostderr:
I1217 00:40:55.012881 1164283 out.go:360] Setting OutFile to fd 1 ...
I1217 00:40:55.013129 1164283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:55.013179 1164283 out.go:374] Setting ErrFile to fd 2...
I1217 00:40:55.013202 1164283 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:55.013802 1164283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 00:40:55.036936 1164283 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:55.037120 1164283 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:55.037816 1164283 cli_runner.go:164] Run: docker container inspect functional-099267 --format={{.State.Status}}
I1217 00:40:55.065199 1164283 ssh_runner.go:195] Run: systemctl --version
I1217 00:40:55.065275 1164283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-099267
I1217 00:40:55.097754 1164283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-099267/id_rsa Username:docker}
I1217 00:40:55.215563 1164283 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-099267 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigest
s":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47
ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-099267"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/
kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783
"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"715206a349d8b31a004fd41fd5c78471606af061fd173cd96b8de5a1473fd0e1","repoDigests":["localhost/minikube-local-cache-test@sha256:316c382b77c13c05f55eef517beeb47341d622888bcbafa7e3b3b6314bae8169"],"repoTags":["localhost/minikube-local-cache-test:functional-099267"],"size":"3330"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"13878
4d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2
723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-099267 image ls --format json --alsologtostderr:
I1217 00:40:54.733367 1164213 out.go:360] Setting OutFile to fd 1 ...
I1217 00:40:54.733574 1164213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.733595 1164213 out.go:374] Setting ErrFile to fd 2...
I1217 00:40:54.733629 1164213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.734392 1164213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 00:40:54.735443 1164213 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.735655 1164213 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.736481 1164213 cli_runner.go:164] Run: docker container inspect functional-099267 --format={{.State.Status}}
I1217 00:40:54.754411 1164213 ssh_runner.go:195] Run: systemctl --version
I1217 00:40:54.754465 1164213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-099267
I1217 00:40:54.780497 1164213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-099267/id_rsa Username:docker}
I1217 00:40:54.884776 1164213 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
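Note: the JSON listing above is a flat array of objects with id, repoDigests, repoTags and size fields, so it is straightforward to post-process outside the test harness. A minimal sketch, assuming jq is installed on the host (jq is not part of the test itself):

out/minikube-linux-arm64 -p functional-099267 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'

This prints only the tagged references, essentially the same set that the short-format listing shows.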

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-099267 image ls --format yaml --alsologtostderr:
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-099267
size: "4789170"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 715206a349d8b31a004fd41fd5c78471606af061fd173cd96b8de5a1473fd0e1
repoDigests:
- localhost/minikube-local-cache-test@sha256:316c382b77c13c05f55eef517beeb47341d622888bcbafa7e3b3b6314bae8169
repoTags:
- localhost/minikube-local-cache-test:functional-099267
size: "3330"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-099267 image ls --format yaml --alsologtostderr:
I1217 00:40:54.435633 1164129 out.go:360] Setting OutFile to fd 1 ...
I1217 00:40:54.436016 1164129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.436025 1164129 out.go:374] Setting ErrFile to fd 2...
I1217 00:40:54.436031 1164129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.436324 1164129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 00:40:54.436995 1164129 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.437103 1164129 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.437622 1164129 cli_runner.go:164] Run: docker container inspect functional-099267 --format={{.State.Status}}
I1217 00:40:54.473356 1164129 ssh_runner.go:195] Run: systemctl --version
I1217 00:40:54.473451 1164129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-099267
I1217 00:40:54.499139 1164129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-099267/id_rsa Username:docker}
I1217 00:40:54.601352 1164129 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-099267 ssh pgrep buildkitd: exit status 1 (339.166195ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr: (3.38103625s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fe6cddcd54a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-099267
--> 71c51a5b120
Successfully tagged localhost/my-image:functional-099267
71c51a5b120d5050f19449b58379aac301ab2ae394fc19642ee210aeaa426528
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-099267 image build -t localhost/my-image:functional-099267 testdata/build --alsologtostderr:
I1217 00:40:54.612953 1164187 out.go:360] Setting OutFile to fd 1 ...
I1217 00:40:54.614766 1164187 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.614817 1164187 out.go:374] Setting ErrFile to fd 2...
I1217 00:40:54.614839 1164187 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:40:54.615157 1164187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 00:40:54.615879 1164187 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.617720 1164187 config.go:182] Loaded profile config "functional-099267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:40:54.618346 1164187 cli_runner.go:164] Run: docker container inspect functional-099267 --format={{.State.Status}}
I1217 00:40:54.641012 1164187 ssh_runner.go:195] Run: systemctl --version
I1217 00:40:54.641066 1164187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-099267
I1217 00:40:54.666839 1164187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-099267/id_rsa Username:docker}
I1217 00:40:54.776398 1164187 build_images.go:162] Building image from path: /tmp/build.781815288.tar
I1217 00:40:54.776498 1164187 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:40:54.787829 1164187 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.781815288.tar
I1217 00:40:54.792477 1164187 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.781815288.tar: stat -c "%s %y" /var/lib/minikube/build/build.781815288.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.781815288.tar': No such file or directory
I1217 00:40:54.792512 1164187 ssh_runner.go:362] scp /tmp/build.781815288.tar --> /var/lib/minikube/build/build.781815288.tar (3072 bytes)
I1217 00:40:54.820824 1164187 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.781815288
I1217 00:40:54.831806 1164187 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.781815288 -xf /var/lib/minikube/build/build.781815288.tar
I1217 00:40:54.842168 1164187 crio.go:315] Building image: /var/lib/minikube/build/build.781815288
I1217 00:40:54.842250 1164187 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-099267 /var/lib/minikube/build/build.781815288 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1217 00:40:57.918928 1164187 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-099267 /var/lib/minikube/build/build.781815288 --cgroup-manager=cgroupfs: (3.076646993s)
I1217 00:40:57.919003 1164187 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.781815288
I1217 00:40:57.927402 1164187 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.781815288.tar
I1217 00:40:57.935747 1164187 build_images.go:218] Built localhost/my-image:functional-099267 from /tmp/build.781815288.tar
I1217 00:40:57.935780 1164187 build_images.go:134] succeeded building to: functional-099267
I1217 00:40:57.935785 1164187 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
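For context, the three STEP lines above imply a three-line Dockerfile in the tarred build context. A hedged shell sketch of an equivalent build (the real files under testdata/build are not reproduced in this log; the file contents are inferred from the STEP output, and the content.txt payload is invented for illustration):

printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
printf 'hello from the build test\n' > content.txt
out/minikube-linux-arm64 -p functional-099267 image build -t localhost/my-image:functional-099267 . --alsologtostderr

As the stderr shows, on the crio runtime minikube copies the context tar into /var/lib/minikube/build on the node and delegates the actual build to podman with --cgroup-manager=cgroupfs.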

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-099267
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image load --daemon kicbase/echo-server:functional-099267 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-099267 image load --daemon kicbase/echo-server:functional-099267 --alsologtostderr: (1.086606017s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image load --daemon kicbase/echo-server:functional-099267 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image save kicbase/echo-server:functional-099267 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image rm kicbase/echo-server:functional-099267 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image ls
2025/12/17 00:40:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-099267
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 image save --daemon kicbase/echo-server:functional-099267 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-099267
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-099267 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-099267
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-099267
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-099267
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-1134739/.minikube/files/etc/test/nested/copy/1136597/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:3.1: (1.184773889s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:3.3: (1.114515371s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 cache add registry.k8s.io/pause:latest: (1.119393957s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2154988622/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache add minikube-local-cache-test:functional-389537
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache delete minikube-local-cache-test:functional-389537
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.666093ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1906138389/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 config get cpus: exit status 14 (46.775735ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 config get cpus: exit status 14 (59.312626ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.381807ms)

                                                
                                                
-- stdout --
	* [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:10:15.241044 1194969 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:10:15.241230 1194969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:15.241242 1194969 out.go:374] Setting ErrFile to fd 2...
	I1217 01:10:15.241247 1194969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:15.241533 1194969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:10:15.241937 1194969 out.go:368] Setting JSON to false
	I1217 01:10:15.242801 1194969 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24766,"bootTime":1765909050,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:10:15.242875 1194969 start.go:143] virtualization:  
	I1217 01:10:15.246340 1194969 out.go:179] * [functional-389537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1217 01:10:15.250222 1194969 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:10:15.250388 1194969 notify.go:221] Checking for updates...
	I1217 01:10:15.256010 1194969 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:10:15.258900 1194969 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:10:15.261742 1194969 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:10:15.264699 1194969 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:10:15.267681 1194969 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:10:15.271168 1194969 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:10:15.271786 1194969 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:10:15.294199 1194969 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:10:15.294343 1194969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:15.360520 1194969 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:15.350273302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:15.360641 1194969 docker.go:319] overlay module found
	I1217 01:10:15.363889 1194969 out.go:179] * Using the docker driver based on existing profile
	I1217 01:10:15.366725 1194969 start.go:309] selected driver: docker
	I1217 01:10:15.366744 1194969 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:15.366851 1194969 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:10:15.370550 1194969 out.go:203] 
	W1217 01:10:15.373552 1194969 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 01:10:15.376336 1194969 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (215.455786ms)

                                                
                                                
-- stdout --
	* [functional-389537] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:10:18.049509 1195605 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:10:18.049733 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.049764 1195605 out.go:374] Setting ErrFile to fd 2...
	I1217 01:10:18.049784 1195605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:18.050297 1195605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:10:18.050762 1195605 out.go:368] Setting JSON to false
	I1217 01:10:18.051720 1195605 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24768,"bootTime":1765909050,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1217 01:10:18.051822 1195605 start.go:143] virtualization:  
	I1217 01:10:18.056956 1195605 out.go:179] * [functional-389537] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1217 01:10:18.059947 1195605 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:10:18.060044 1195605 notify.go:221] Checking for updates...
	I1217 01:10:18.065731 1195605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:10:18.068771 1195605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	I1217 01:10:18.071703 1195605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	I1217 01:10:18.074678 1195605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1217 01:10:18.077595 1195605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:10:18.081023 1195605 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 01:10:18.081621 1195605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:10:18.120581 1195605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1217 01:10:18.120797 1195605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:10:18.188706 1195605 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-17 01:10:18.178636988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:10:18.188812 1195605 docker.go:319] overlay module found
	I1217 01:10:18.191969 1195605 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 01:10:18.194813 1195605 start.go:309] selected driver: docker
	I1217 01:10:18.194851 1195605 start.go:927] validating driver "docker" against &{Name:functional-389537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389537 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:10:18.194963 1195605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:10:18.198653 1195605 out.go:203] 
	W1217 01:10:18.201645 1195605 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 01:10:18.204565 1195605 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh -n functional-389537 "sudo cat /home/docker/cp-test.txt"
I1217 01:08:17.599617 1136597 retry.go:31] will retry after 1.765839693s: Temporary Error: Get "http://10.99.120.4": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cp functional-389537:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3377460682/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh -n functional-389537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh -n functional-389537 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1136597/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /etc/test/nested/copy/1136597/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1136597.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /etc/ssl/certs/1136597.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1136597.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /usr/share/ca-certificates/1136597.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11365972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /etc/ssl/certs/11365972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11365972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /usr/share/ca-certificates/11365972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "sudo systemctl is-active docker": exit status 1 (481.099005ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "sudo systemctl is-active containerd": exit status 1 (375.57841ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.86s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389537 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-389537
localhost/kicbase/echo-server:functional-389537
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389537 image ls --format short --alsologtostderr:
I1217 01:10:20.526077 1196134 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:20.526278 1196134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:20.526306 1196134 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:20.526325 1196134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:20.526587 1196134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:20.527283 1196134 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:20.527476 1196134 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:20.528060 1196134 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:20.545466 1196134 ssh_runner.go:195] Run: systemctl --version
I1217 01:10:20.545536 1196134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:20.561938 1196134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:10:20.660279 1196134 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389537 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ localhost/kicbase/echo-server           │ functional-389537  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-389537  │ 715206a349d8b │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-389537  │ b934229f329e0 │ 1.64MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389537 image ls --format table --alsologtostderr:
I1217 01:10:24.993940 1196623 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:24.994064 1196623 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:24.994072 1196623 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:24.994078 1196623 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:24.994338 1196623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:24.994965 1196623 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:24.995085 1196623 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:24.995606 1196623 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:25.022462 1196623 ssh_runner.go:195] Run: systemctl --version
I1217 01:10:25.022530 1196623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:25.042851 1196623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:10:25.139515 1196623 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389537 image ls --format json --alsologtostderr:
[{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"715206a349d8b31a004fd41fd5c78471606af061fd173cd96b8de5a1473fd0e1","repoDigests":["localhost/minikube-local-cache-test@sha256:316c382b77c13c05f55eef517beeb47341d622888bcbafa7e3b3b6314bae8169"],"repoTags":["localhost/minikube-local-cache-test:functional-389537"],"size":"3330"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"si
ze":"60857170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f1
35bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407
f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"717ec6843080e661a4b5d66752d312f1529d1edbad320f0fc5ed1c018ccb3fb4","repoDigests":["docker.io/library/02ced55aad2ae0ed651b3c4bfc411aa3a4587daf33b9f6f2ac0eee8c73c2e957-tmp@sha256:ac635e69870f2eb651f947184a82922fea6a38a0dd44509bf75363ef8b8636cd"],"repoTags":[],"size":"1638179"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-389537"],"size":"4788229"},{"id":"b934229f329e0c3978c1bc05b2378647b607f22ef2337f97a9ea9a89c95baa00","repoDigests":["localhost/my-image@sha256:73db62cd02f5b5269a560052a57131
b4c02d9c259a1cd7e86911aceaab02819f"],"repoTags":["localhost/my-image:functional-389537"],"size":"1640791"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a
4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389537 image ls --format json --alsologtostderr:
I1217 01:10:24.772795 1196586 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:24.772978 1196586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:24.773011 1196586 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:24.773037 1196586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:24.773421 1196586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:24.774458 1196586 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:24.774665 1196586 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:24.775707 1196586 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:24.793588 1196586 ssh_runner.go:195] Run: systemctl --version
I1217 01:10:24.793645 1196586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:24.811217 1196586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:10:24.903346 1196586 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389537 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 715206a349d8b31a004fd41fd5c78471606af061fd173cd96b8de5a1473fd0e1
repoDigests:
- localhost/minikube-local-cache-test@sha256:316c382b77c13c05f55eef517beeb47341d622888bcbafa7e3b3b6314bae8169
repoTags:
- localhost/minikube-local-cache-test:functional-389537
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-389537
size: "4788229"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389537 image ls --format yaml --alsologtostderr:
I1217 01:10:20.748501 1196170 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:20.748628 1196170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:20.748634 1196170 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:20.748639 1196170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:20.748924 1196170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:20.749566 1196170 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:20.749682 1196170 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:20.750212 1196170 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:20.780865 1196170 ssh_runner.go:195] Run: systemctl --version
I1217 01:10:20.780922 1196170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:20.804114 1196170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:10:20.903972 1196170 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh pgrep buildkitd: exit status 1 (279.445733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image build -t localhost/my-image:functional-389537 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-389537 image build -t localhost/my-image:functional-389537 testdata/build --alsologtostderr: (3.26547132s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389537 image build -t localhost/my-image:functional-389537 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 717ec684308
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-389537
--> b934229f329
Successfully tagged localhost/my-image:functional-389537
b934229f329e0c3978c1bc05b2378647b607f22ef2337f97a9ea9a89c95baa00
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389537 image build -t localhost/my-image:functional-389537 testdata/build --alsologtostderr:
I1217 01:10:21.274145 1196275 out.go:360] Setting OutFile to fd 1 ...
I1217 01:10:21.274359 1196275 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:21.274386 1196275 out.go:374] Setting ErrFile to fd 2...
I1217 01:10:21.274406 1196275 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 01:10:21.274682 1196275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
I1217 01:10:21.275324 1196275 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:21.276042 1196275 config.go:182] Loaded profile config "functional-389537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 01:10:21.276698 1196275 cli_runner.go:164] Run: docker container inspect functional-389537 --format={{.State.Status}}
I1217 01:10:21.293784 1196275 ssh_runner.go:195] Run: systemctl --version
I1217 01:10:21.293848 1196275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389537
I1217 01:10:21.314639 1196275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/functional-389537/id_rsa Username:docker}
I1217 01:10:21.406985 1196275 build_images.go:162] Building image from path: /tmp/build.3766308984.tar
I1217 01:10:21.407059 1196275 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 01:10:21.414838 1196275 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3766308984.tar
I1217 01:10:21.418592 1196275 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3766308984.tar: stat -c "%s %y" /var/lib/minikube/build/build.3766308984.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3766308984.tar': No such file or directory
I1217 01:10:21.418626 1196275 ssh_runner.go:362] scp /tmp/build.3766308984.tar --> /var/lib/minikube/build/build.3766308984.tar (3072 bytes)
I1217 01:10:21.436816 1196275 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3766308984
I1217 01:10:21.445172 1196275 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3766308984 -xf /var/lib/minikube/build/build.3766308984.tar
I1217 01:10:21.453056 1196275 crio.go:315] Building image: /var/lib/minikube/build/build.3766308984
I1217 01:10:21.453145 1196275 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-389537 /var/lib/minikube/build/build.3766308984 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1217 01:10:24.454619 1196275 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-389537 /var/lib/minikube/build/build.3766308984 --cgroup-manager=cgroupfs: (3.001445889s)
I1217 01:10:24.454683 1196275 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3766308984
I1217 01:10:24.464410 1196275 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3766308984.tar
I1217 01:10:24.473264 1196275 build_images.go:218] Built localhost/my-image:functional-389537 from /tmp/build.3766308984.tar
I1217 01:10:24.473305 1196275 build_images.go:134] succeeded building to: functional-389537
I1217 01:10:24.473311 1196275 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-389537
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image load --daemon kicbase/echo-server:functional-389537 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image save kicbase/echo-server:functional-389537 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image rm kicbase/echo-server:functional-389537 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-389537
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 image save --daemon kicbase/echo-server:functional-389537 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.40s)
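Taken together, the ImageCommands subtests above amount to a save/load round trip through the crio runtime: load a tagged image into the cluster, save it back out to a tarball, remove it, and reload it from the file. A minimal Go sketch of that same sequence, shelling out to the minikube binary as the tests do (the profile name, image tag, and tar path below are illustrative, not taken from the suite):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and fails loudly, mirroring the (dbg) Run steps in the log.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
}

func main() {
	profile := "functional-389537"          // illustrative profile name
	img := "kicbase/echo-server:" + profile // image tag used by the tests
	tar := "/tmp/echo-server-save.tar"      // illustrative save path

	run("out/minikube-linux-arm64", "-p", profile, "image", "save", img, tar)
	run("out/minikube-linux-arm64", "-p", profile, "image", "rm", img)
	run("out/minikube-linux-arm64", "-p", profile, "image", "load", tar)
	run("out/minikube-linux-arm64", "-p", profile, "image", "ls")
}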

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.473537ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 01:08:22.428407 1136597 retry.go:31] will retry after 318.616049ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "sudo umount -f /mount-9p": exit status 1 (253.950014ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-389537 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2981185060/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.73s)
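The specific-port mount test illustrates the retry pattern the suite leans on: the first findmnt probe over ssh can fail before the 9p mount settles, so the helper retries after a short backoff (the retry.go:31 line above). A rough Go equivalent of that probe loop, assuming the same ssh probe command; the attempt count and sleep are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-389537" // illustrative profile name
	probe := `findmnt -T /mount-9p | grep 9p`

	// Poll until the 9p mount is visible inside the guest, as the test helper does.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", probe).CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible:\n%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never became visible")
}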

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T" /mount1: exit status 1 (559.056443ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 01:08:24.348741 1136597 retry.go:31] will retry after 570.430385ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389537 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-389537 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389537 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo527044023/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-389537 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "328.444043ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "60.425972ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.079665ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.619394ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-389537
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 01:13:07.479784 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.486140 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.497502 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.518856 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.560213 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.641601 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:07.803152 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:08.124871 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:08.766873 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:10.048216 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:12.609671 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:17.731864 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:27.973908 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:48.455904 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:14:29.418119 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:15:07.912997 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m18.265734057s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.15s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- rollout status deployment/busybox
E1217 01:15:51.340686 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 kubectl -- rollout status deployment/busybox: (4.022098422s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-d62f7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-fcp4p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-hw4rm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-d62f7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-fcp4p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-hw4rm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-d62f7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-fcp4p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-hw4rm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.78s)
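The DeployApp step validates in-cluster DNS from every busybox replica by resolving kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local inside each pod. A compact Go sketch of that check loop, assuming plain kubectl against the same context (the pod names here are illustrative; the suite discovers them at runtime):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-d62f7", "busybox-7b57f96db7-fcp4p"} // illustrative pod names
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, n := range names {
			// Run nslookup from inside the pod, as the ha_test.go steps above do.
			out, err := exec.Command("kubectl", "--context", "ha-202151",
				"exec", pod, "--", "nslookup", n).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, n, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, n)
		}
	}
}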

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-d62f7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-d62f7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-fcp4p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-fcp4p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-hw4rm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 kubectl -- exec busybox-7b57f96db7-hw4rm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node add --alsologtostderr -v 5
E1217 01:16:45.353793 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 node add --alsologtostderr -v 5: (58.297497515s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5: (1.034357438s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.33s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-202151 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.045472225s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 status --output json --alsologtostderr -v 5: (1.037341427s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp testdata/cp-test.txt ha-202151:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151_ha-202151-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test_ha-202151_ha-202151-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151_ha-202151-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test_ha-202151_ha-202151-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151_ha-202151-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test_ha-202151_ha-202151-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp testdata/cp-test.txt ha-202151-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m02:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m02_ha-202151.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test_ha-202151-m02_ha-202151.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m02:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m02_ha-202151-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test_ha-202151-m02_ha-202151-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m02:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151-m02_ha-202151-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test_ha-202151-m02_ha-202151-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp testdata/cp-test.txt ha-202151-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m03_ha-202151.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m03_ha-202151-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m03:/home/docker/cp-test.txt ha-202151-m04:/home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test_ha-202151-m03_ha-202151-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp testdata/cp-test.txt ha-202151-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4004201784/001/cp-test_ha-202151-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151:/home/docker/cp-test_ha-202151-m04_ha-202151.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151 "sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m02:/home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m02 "sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 cp ha-202151-m04:/home/docker/cp-test.txt ha-202151-m03:/home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 ssh -n ha-202151-m03 "sudo cat /home/docker/cp-test_ha-202151-m04_ha-202151-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.58s)
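CopyFile sweeps a full node-to-node matrix: the same cp-test.txt is pushed to each node, pulled back to the host, and copied between every pair of nodes, with an ssh cat after each copy to confirm the contents arrived. A condensed Go sketch of that matrix under the same cp/ssh commands (the node list and file paths are illustrative, and the host-side pull is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	profile := "ha-202151"
	nodes := []string{"ha-202151", "ha-202151-m02", "ha-202151-m03", "ha-202151-m04"} // illustrative

	for _, src := range nodes {
		// Push the test file to the source node, then fan it out to every other node.
		if err := run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println(err)
			continue
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", target); err != nil {
				fmt.Println(err)
				continue
			}
			// Verify the copy landed by cat-ing it back over ssh, as helpers_test.go does.
			_ = run("-p", profile, "ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}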

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 node stop m02 --alsologtostderr -v 5: (12.105461464s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5: exit status 7 (761.969715ms)

                                                
                                                
-- stdout --
	ha-202151
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-202151-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-202151-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-202151-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:17:30.211272 1212572 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:17:30.211457 1212572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:30.211469 1212572 out.go:374] Setting ErrFile to fd 2...
	I1217 01:17:30.211474 1212572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:30.211725 1212572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:17:30.211994 1212572 out.go:368] Setting JSON to false
	I1217 01:17:30.212035 1212572 mustload.go:66] Loading cluster: ha-202151
	I1217 01:17:30.212121 1212572 notify.go:221] Checking for updates...
	I1217 01:17:30.212524 1212572 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:17:30.212553 1212572 status.go:174] checking status of ha-202151 ...
	I1217 01:17:30.213244 1212572 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:17:30.238970 1212572 status.go:371] ha-202151 host status = "Running" (err=<nil>)
	I1217 01:17:30.238994 1212572 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:17:30.239431 1212572 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151
	I1217 01:17:30.266818 1212572 host.go:66] Checking if "ha-202151" exists ...
	I1217 01:17:30.267131 1212572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:17:30.267199 1212572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151
	I1217 01:17:30.286602 1212572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151/id_rsa Username:docker}
	I1217 01:17:30.386719 1212572 ssh_runner.go:195] Run: systemctl --version
	I1217 01:17:30.394159 1212572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:17:30.409459 1212572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:17:30.466203 1212572 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-17 01:17:30.456993545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:17:30.466766 1212572 kubeconfig.go:125] found "ha-202151" server: "https://192.168.49.254:8443"
	I1217 01:17:30.466807 1212572 api_server.go:166] Checking apiserver status ...
	I1217 01:17:30.466856 1212572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:17:30.479320 1212572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1242/cgroup
	I1217 01:17:30.487901 1212572 api_server.go:182] apiserver freezer: "4:freezer:/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio/crio-3b62247239d54a0df909adfa213036293649bc4e77fc2ca825b350ee0dca8e46"
	I1217 01:17:30.487968 1212572 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0d1af93acb20f9d9607e4b5c6e9bcad89e3e75abd0bbb5f03366a306abe2519d/crio/crio-3b62247239d54a0df909adfa213036293649bc4e77fc2ca825b350ee0dca8e46/freezer.state
	I1217 01:17:30.495639 1212572 api_server.go:204] freezer state: "THAWED"
	I1217 01:17:30.495666 1212572 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:17:30.504017 1212572 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:17:30.504049 1212572 status.go:463] ha-202151 apiserver status = Running (err=<nil>)
	I1217 01:17:30.504060 1212572 status.go:176] ha-202151 status: &{Name:ha-202151 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:17:30.504119 1212572 status.go:174] checking status of ha-202151-m02 ...
	I1217 01:17:30.504635 1212572 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:17:30.522867 1212572 status.go:371] ha-202151-m02 host status = "Stopped" (err=<nil>)
	I1217 01:17:30.522892 1212572 status.go:384] host is not running, skipping remaining checks
	I1217 01:17:30.522906 1212572 status.go:176] ha-202151-m02 status: &{Name:ha-202151-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:17:30.522927 1212572 status.go:174] checking status of ha-202151-m03 ...
	I1217 01:17:30.523244 1212572 cli_runner.go:164] Run: docker container inspect ha-202151-m03 --format={{.State.Status}}
	I1217 01:17:30.540722 1212572 status.go:371] ha-202151-m03 host status = "Running" (err=<nil>)
	I1217 01:17:30.540748 1212572 host.go:66] Checking if "ha-202151-m03" exists ...
	I1217 01:17:30.541038 1212572 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m03
	I1217 01:17:30.558330 1212572 host.go:66] Checking if "ha-202151-m03" exists ...
	I1217 01:17:30.558777 1212572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:17:30.558828 1212572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m03
	I1217 01:17:30.576857 1212572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m03/id_rsa Username:docker}
	I1217 01:17:30.674005 1212572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:17:30.688957 1212572 kubeconfig.go:125] found "ha-202151" server: "https://192.168.49.254:8443"
	I1217 01:17:30.688990 1212572 api_server.go:166] Checking apiserver status ...
	I1217 01:17:30.689059 1212572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:17:30.701624 1212572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I1217 01:17:30.710819 1212572 api_server.go:182] apiserver freezer: "4:freezer:/docker/4a2bfe6401553f7c15b879b30af491a370b1ce594126b95450980bdfde4d6caa/crio/crio-e94e4941f3a4b9d4643bbff4b9a11da2ba147d4e430cac1167026a7127248aa6"
	I1217 01:17:30.710911 1212572 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4a2bfe6401553f7c15b879b30af491a370b1ce594126b95450980bdfde4d6caa/crio/crio-e94e4941f3a4b9d4643bbff4b9a11da2ba147d4e430cac1167026a7127248aa6/freezer.state
	I1217 01:17:30.719169 1212572 api_server.go:204] freezer state: "THAWED"
	I1217 01:17:30.719204 1212572 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 01:17:30.727613 1212572 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 01:17:30.727644 1212572 status.go:463] ha-202151-m03 apiserver status = Running (err=<nil>)
	I1217 01:17:30.727654 1212572 status.go:176] ha-202151-m03 status: &{Name:ha-202151-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:17:30.727694 1212572 status.go:174] checking status of ha-202151-m04 ...
	I1217 01:17:30.728015 1212572 cli_runner.go:164] Run: docker container inspect ha-202151-m04 --format={{.State.Status}}
	I1217 01:17:30.746270 1212572 status.go:371] ha-202151-m04 host status = "Running" (err=<nil>)
	I1217 01:17:30.746326 1212572 host.go:66] Checking if "ha-202151-m04" exists ...
	I1217 01:17:30.746734 1212572 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-202151-m04
	I1217 01:17:30.775028 1212572 host.go:66] Checking if "ha-202151-m04" exists ...
	I1217 01:17:30.775351 1212572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:17:30.775404 1212572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-202151-m04
	I1217 01:17:30.796070 1212572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/ha-202151-m04/id_rsa Username:docker}
	I1217 01:17:30.889944 1212572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:17:30.910945 1212572 status.go:176] ha-202151-m04 status: &{Name:ha-202151-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
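After stopping m02 the suite expects `minikube status` to return a non-zero exit code (exit status 7 in this run) while still printing per-node state, so the check treats that exit as data rather than a hard failure. A small Go sketch of reading the status that way (the profile name is illustrative, and no meaning is asserted here for specific exit codes beyond what this run shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-202151", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	// A non-zero exit is expected while any node is stopped; capture the code instead of aborting.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status exited with code %d (some nodes not running)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("failed to run status: %v\n", err)
	}
}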

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.028591133s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 stop --alsologtostderr -v 5: (37.544757329s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5
E1217 01:26:28.434289 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:26:45.353929 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 start --wait true --alsologtostderr -v 5: (1m39.595648499s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.31s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 node delete m03 --alsologtostderr -v 5: (11.194714697s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 stop --alsologtostderr -v 5
E1217 01:28:07.479753 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-202151 stop --alsologtostderr -v 5: (36.104500587s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-202151 status --alsologtostderr -v 5: exit status 7 (113.957089ms)

                                                
                                                
-- stdout --
	ha-202151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-202151-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-202151-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:28:23.842545 1225651 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:23.842762 1225651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.842795 1225651 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:23.842820 1225651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:23.843093 1225651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:28:23.843303 1225651 out.go:368] Setting JSON to false
	I1217 01:28:23.843359 1225651 mustload.go:66] Loading cluster: ha-202151
	I1217 01:28:23.843455 1225651 notify.go:221] Checking for updates...
	I1217 01:28:23.843833 1225651 config.go:182] Loaded profile config "ha-202151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:23.843882 1225651 status.go:174] checking status of ha-202151 ...
	I1217 01:28:23.844517 1225651 cli_runner.go:164] Run: docker container inspect ha-202151 --format={{.State.Status}}
	I1217 01:28:23.863977 1225651 status.go:371] ha-202151 host status = "Stopped" (err=<nil>)
	I1217 01:28:23.863997 1225651 status.go:384] host is not running, skipping remaining checks
	I1217 01:28:23.864005 1225651 status.go:176] ha-202151 status: &{Name:ha-202151 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:28:23.864034 1225651 status.go:174] checking status of ha-202151-m02 ...
	I1217 01:28:23.864346 1225651 cli_runner.go:164] Run: docker container inspect ha-202151-m02 --format={{.State.Status}}
	I1217 01:28:23.886853 1225651 status.go:371] ha-202151-m02 host status = "Stopped" (err=<nil>)
	I1217 01:28:23.886874 1225651 status.go:384] host is not running, skipping remaining checks
	I1217 01:28:23.886888 1225651 status.go:176] ha-202151-m02 status: &{Name:ha-202151-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:28:23.886907 1225651 status.go:174] checking status of ha-202151-m04 ...
	I1217 01:28:23.887204 1225651 cli_runner.go:164] Run: docker container inspect ha-202151-m04 --format={{.State.Status}}
	I1217 01:28:23.905064 1225651 status.go:371] ha-202151-m04 host status = "Stopped" (err=<nil>)
	I1217 01:28:23.905086 1225651 status.go:384] host is not running, skipping remaining checks
	I1217 01:28:23.905094 1225651 status.go:176] ha-202151-m04 status: &{Name:ha-202151-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.22s)

                                                
                                    
TestJSONOutput/start/Command (78.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-639976 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-639976 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.959418257s)
--- PASS: TestJSONOutput/start/Command (78.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.97s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-639976 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-639976 --output=json --user=testUser: (5.966180055s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-504318 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-504318 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (106.301115ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bb1cff4d-69be-4fdf-bb72-f27ef770c070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-504318] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"252b7386-0fd7-4c02-afc2-ad1b0a491b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"d94eaeea-1cbe-4dc5-8ae1-2291e81572b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"28a2ae4f-5c1a-49e6-a581-67efcc270482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig"}}
	{"specversion":"1.0","id":"311b30ca-43cd-4875-9434-16b1bad0262f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube"}}
	{"specversion":"1.0","id":"a0afd534-b58e-455a-bc5c-ab4723100fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a81bf900-15e6-4390-a855-399e3f749f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9355006e-f6de-42ae-b1af-3ed914320f20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-504318" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-504318
--- PASS: TestErrorJSONOutput (0.26s)
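Editor's note: each line in the stdout above is a self-contained CloudEvents-style JSON object. A minimal Go sketch of decoding one such line, assuming only the fields visible in this report; the struct is an illustration, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models just the fields this example reads from the JSON lines
// emitted by `minikube start --output=json`.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout above, trimmed to the fields used here.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}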

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.06s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-582300 --network=
E1217 01:40:07.912433 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-582300 --network=: (36.840378061s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-582300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-582300
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-582300: (2.19656812s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.06s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.57s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-108379 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-108379 --network=bridge: (34.347779513s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-108379" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-108379
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-108379: (2.195866393s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.57s)

                                                
                                    
x
+
TestKicExistingNetwork (34.07s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1217 01:41:03.170315 1136597 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 01:41:03.189832 1136597 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 01:41:03.189913 1136597 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 01:41:03.189931 1136597 cli_runner.go:164] Run: docker network inspect existing-network
W1217 01:41:03.206461 1136597 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 01:41:03.206490 1136597 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1217 01:41:03.206505 1136597 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1217 01:41:03.206626 1136597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:41:03.224927 1136597 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e224ccab4890 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:8b:ae:d4:d3:20} reservation:<nil>}
I1217 01:41:03.225294 1136597 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40021672b0}
I1217 01:41:03.225326 1136597 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 01:41:03.225393 1136597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 01:41:03.287695 1136597 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-720027 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-720027 --network=existing-network: (31.780665291s)
helpers_test.go:176: Cleaning up "existing-network-720027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-720027
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-720027: (2.141065695s)
I1217 01:41:37.228000 1136597 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.07s)
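Editor's note: the network_create.go lines above show minikube skipping 192.168.49.0/24 (already taken by the existing bridge) and settling on 192.168.58.0/24. A rough Go sketch of that subnet hunt, assuming a local Docker daemon; the 9-address step is inferred only from the two values seen in this report, so treat it as an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the subnets already claimed by existing Docker networks.
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return taken
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	for third := 49; third < 256; third += 9 { // 49, 58, ... as seen in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		// The log then creates it with flags like:
		//   docker network create --driver=bridge --subnet=192.168.58.0/24
		//     --gateway=192.168.58.1 -o com.docker.network.driver.mtu=1500 existing-network
		return
	}
}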

                                                
                                    
x
+
TestKicCustomSubnet (35.19s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-967656 --subnet=192.168.60.0/24
E1217 01:41:45.357724 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-967656 --subnet=192.168.60.0/24: (32.969304483s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-967656 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-967656" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-967656
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-967656: (2.183834006s)
--- PASS: TestKicCustomSubnet (35.19s)
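Editor's note: the subnet verification above is a single `docker network inspect` with a Go template. A minimal Go sketch of the same check, assuming the profile name and subnet from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // the --subnet passed to `minikube start` above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-967656",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("network uses the requested subnet:", got)
}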

                                                
                                    
x
+
TestKicStaticIP (35.5s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-981065 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-981065 --static-ip=192.168.200.200: (33.120010332s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-981065 ip
helpers_test.go:176: Cleaning up "static-ip-981065" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-981065
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-981065: (2.211931567s)
--- PASS: TestKicStaticIP (35.50s)
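Editor's note: the static-IP check reduces to comparing `minikube ip` against the requested address. A minimal Go sketch, assuming the binary path and profile name from this report (use plain `minikube` outside the CI workspace):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200" // the --static-ip passed to `minikube start` above
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-981065", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("ip mismatch: got %s, want %s\n", got, want)
	} else {
		fmt.Println("static IP assigned as requested:", got)
	}
}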

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (78.05s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-890908 --driver=docker  --container-runtime=crio
E1217 01:43:07.480207 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:43:08.436872 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-890908 --driver=docker  --container-runtime=crio: (32.57286369s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-893616 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-893616 --driver=docker  --container-runtime=crio: (39.818975969s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-890908
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-893616
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-893616" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-893616
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-893616: (2.076172386s)
helpers_test.go:176: Cleaning up "first-890908" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-890908
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-890908: (2.086907247s)
--- PASS: TestMinikubeProfile (78.05s)
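Editor's note: the test above runs `profile list -ojson` after switching profiles. A rough Go sketch of consuming that output; the top-level "valid"/"invalid" arrays and the "Name" field reflect the profile-list JSON schema as I understand it and are not asserted by this report, so treat them as assumptions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profile models only the single field this example reads.
type profile struct {
	Name string `json:"Name"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var parsed map[string][]profile
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, p := range parsed["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}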

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-460689 --memory=3072 --mount-string /tmp/TestMountStartserial909572856/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-460689 --memory=3072 --mount-string /tmp/TestMountStartserial909572856/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.961115659s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-460689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
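Editor's note: the mount verification above only lists the guest side of the 9p mount. A hypothetical Go sketch of a slightly stronger round-trip check, assuming a profile started with `--mount-string <hostDir>:/minikube-host`; the hostDir and file name here are made up for illustration (the real test uses a per-run temp dir like /tmp/TestMountStartserial.../001):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	hostDir := "/tmp/mount-start-demo" // stand-in for the --mount-string source directory
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}
	marker := filepath.Join(hostDir, "created-on-host")
	if err := os.WriteFile(marker, []byte("hello from the host\n"), 0o644); err != nil {
		panic(err)
	}
	// List the guest side of the mount, exactly as the test does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-460689",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Println("ssh ls failed:", err)
		return
	}
	if strings.Contains(string(out), "created-on-host") {
		fmt.Println("host file is visible inside the guest mount")
	} else {
		fmt.Println("file not visible; guest sees:", string(out))
	}
}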

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462305 --memory=3072 --mount-string /tmp/TestMountStartserial909572856/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462305 --memory=3072 --mount-string /tmp/TestMountStartserial909572856/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.871377992s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462305 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-460689 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-460689 --alsologtostderr -v=5: (1.722593622s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462305 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-462305
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-462305: (1.291833013s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462305
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462305: (6.734389147s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462305 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-410322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 01:45:07.911503 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:46:10.549413 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:46:45.354148 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-410322 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.380466727s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.91s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-410322 -- rollout status deployment/busybox: (3.503103501s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-5xnlv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-cj8wl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-5xnlv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-cj8wl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-5xnlv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-cj8wl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.35s)
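Editor's note: the DNS checks above exec `nslookup` in each busybox pod through `minikube kubectl`. A minimal Go sketch of driving the same commands, assuming the binary path and profile from this run; the pod names are the ones from this particular rollout and will differ elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-5xnlv", "busybox-7b57f96db7-cj8wl"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range names {
			// Same shape as the logged commands: kubectl exec <pod> -- nslookup <name>.
			cmd := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-410322",
				"--", "exec", pod, "--", "nslookup", host)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, host, err, out)
			} else {
				fmt.Printf("%s: %s resolved\n", pod, host)
			}
		}
	}
}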

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-5xnlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-5xnlv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-cj8wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-410322 -- exec busybox-7b57f96db7-cj8wl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (57.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-410322 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-410322 -v=5 --alsologtostderr: (56.703476648s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.64s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-410322 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp testdata/cp-test.txt multinode-410322:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2158722926/001/cp-test_multinode-410322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322:/home/docker/cp-test.txt multinode-410322-m02:/home/docker/cp-test_multinode-410322_multinode-410322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test_multinode-410322_multinode-410322-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322:/home/docker/cp-test.txt multinode-410322-m03:/home/docker/cp-test_multinode-410322_multinode-410322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test_multinode-410322_multinode-410322-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp testdata/cp-test.txt multinode-410322-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2158722926/001/cp-test_multinode-410322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m02:/home/docker/cp-test.txt multinode-410322:/home/docker/cp-test_multinode-410322-m02_multinode-410322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test_multinode-410322-m02_multinode-410322.txt"
E1217 01:48:07.479704 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m02:/home/docker/cp-test.txt multinode-410322-m03:/home/docker/cp-test_multinode-410322-m02_multinode-410322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test_multinode-410322-m02_multinode-410322-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp testdata/cp-test.txt multinode-410322-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2158722926/001/cp-test_multinode-410322-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m03:/home/docker/cp-test.txt multinode-410322:/home/docker/cp-test_multinode-410322-m03_multinode-410322.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322 "sudo cat /home/docker/cp-test_multinode-410322-m03_multinode-410322.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 cp multinode-410322-m03:/home/docker/cp-test.txt multinode-410322-m02:/home/docker/cp-test_multinode-410322-m03_multinode-410322-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 ssh -n multinode-410322-m02 "sudo cat /home/docker/cp-test_multinode-410322-m03_multinode-410322-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-410322 node stop m03: (1.335131406s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-410322 status: exit status 7 (542.710493ms)

                                                
                                                
-- stdout --
	multinode-410322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-410322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-410322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr: exit status 7 (535.299833ms)

                                                
                                                
-- stdout --
	multinode-410322
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-410322-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-410322-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:48:13.783146 1288683 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:48:13.783263 1288683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:48:13.783274 1288683 out.go:374] Setting ErrFile to fd 2...
	I1217 01:48:13.783278 1288683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:48:13.783513 1288683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:48:13.783692 1288683 out.go:368] Setting JSON to false
	I1217 01:48:13.783759 1288683 mustload.go:66] Loading cluster: multinode-410322
	I1217 01:48:13.783840 1288683 notify.go:221] Checking for updates...
	I1217 01:48:13.784261 1288683 config.go:182] Loaded profile config "multinode-410322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:48:13.784300 1288683 status.go:174] checking status of multinode-410322 ...
	I1217 01:48:13.785199 1288683 cli_runner.go:164] Run: docker container inspect multinode-410322 --format={{.State.Status}}
	I1217 01:48:13.805075 1288683 status.go:371] multinode-410322 host status = "Running" (err=<nil>)
	I1217 01:48:13.805100 1288683 host.go:66] Checking if "multinode-410322" exists ...
	I1217 01:48:13.805402 1288683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-410322
	I1217 01:48:13.828224 1288683 host.go:66] Checking if "multinode-410322" exists ...
	I1217 01:48:13.828566 1288683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:48:13.828616 1288683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-410322
	I1217 01:48:13.850008 1288683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/multinode-410322/id_rsa Username:docker}
	I1217 01:48:13.946375 1288683 ssh_runner.go:195] Run: systemctl --version
	I1217 01:48:13.952870 1288683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:48:13.965937 1288683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:48:14.028820 1288683 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-17 01:48:14.018434569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1217 01:48:14.029383 1288683 kubeconfig.go:125] found "multinode-410322" server: "https://192.168.67.2:8443"
	I1217 01:48:14.029430 1288683 api_server.go:166] Checking apiserver status ...
	I1217 01:48:14.029485 1288683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:48:14.041694 1288683 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	I1217 01:48:14.050804 1288683 api_server.go:182] apiserver freezer: "4:freezer:/docker/0222d089558af91a08c7a9d43dba4ffd3cdae03760ebc622e4a7204d892e701c/crio/crio-eca0a04023ccbd132fcb6c47407c13b3fc70eb940e7dd54360a4a66991dd5ee9"
	I1217 01:48:14.050881 1288683 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0222d089558af91a08c7a9d43dba4ffd3cdae03760ebc622e4a7204d892e701c/crio/crio-eca0a04023ccbd132fcb6c47407c13b3fc70eb940e7dd54360a4a66991dd5ee9/freezer.state
	I1217 01:48:14.058856 1288683 api_server.go:204] freezer state: "THAWED"
	I1217 01:48:14.058889 1288683 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 01:48:14.067200 1288683 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 01:48:14.067233 1288683 status.go:463] multinode-410322 apiserver status = Running (err=<nil>)
	I1217 01:48:14.067246 1288683 status.go:176] multinode-410322 status: &{Name:multinode-410322 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:48:14.067262 1288683 status.go:174] checking status of multinode-410322-m02 ...
	I1217 01:48:14.067582 1288683 cli_runner.go:164] Run: docker container inspect multinode-410322-m02 --format={{.State.Status}}
	I1217 01:48:14.086345 1288683 status.go:371] multinode-410322-m02 host status = "Running" (err=<nil>)
	I1217 01:48:14.086371 1288683 host.go:66] Checking if "multinode-410322-m02" exists ...
	I1217 01:48:14.086703 1288683 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-410322-m02
	I1217 01:48:14.105147 1288683 host.go:66] Checking if "multinode-410322-m02" exists ...
	I1217 01:48:14.105461 1288683 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:48:14.105511 1288683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-410322-m02
	I1217 01:48:14.129973 1288683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/22168-1134739/.minikube/machines/multinode-410322-m02/id_rsa Username:docker}
	I1217 01:48:14.226065 1288683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:48:14.239338 1288683 status.go:176] multinode-410322-m02 status: &{Name:multinode-410322-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:48:14.239381 1288683 status.go:174] checking status of multinode-410322-m03 ...
	I1217 01:48:14.239699 1288683 cli_runner.go:164] Run: docker container inspect multinode-410322-m03 --format={{.State.Status}}
	I1217 01:48:14.261399 1288683 status.go:371] multinode-410322-m03 host status = "Stopped" (err=<nil>)
	I1217 01:48:14.261425 1288683 status.go:384] host is not running, skipping remaining checks
	I1217 01:48:14.261433 1288683 status.go:176] multinode-410322-m03 status: &{Name:multinode-410322-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
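Editor's note: for the still-running control plane, the stderr above ends with an HTTPS probe of the apiserver's /healthz endpoint. A minimal Go sketch of that final step, assuming the node IP and port from this run; certificate verification is skipped purely to keep the illustration short, which a real check would not do:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver status = Running") // matches the "returned 200: ok" line above
	} else {
		fmt.Println("healthz returned", resp.Status)
	}
}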

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-410322 node start m03 -v=5 --alsologtostderr: (7.326559779s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.08s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (72.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-410322
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-410322
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-410322: (25.067267944s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-410322 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-410322 --wait=true -v=5 --alsologtostderr: (47.228069023s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-410322
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.45s)
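Editor's note: the invariant this test exercises is that a stop/start cycle leaves `node list` unchanged. A simplified Go sketch of that flow, assuming the binary path and profile from this run; comparing raw command output is a shortcut, whereas the test compares the parsed node names:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v failed: %v\n", args, err)
	}
	return string(out)
}

func main() {
	before := run("node", "list", "-p", "multinode-410322")
	run("stop", "-p", "multinode-410322")
	run("start", "-p", "multinode-410322", "--wait=true")
	after := run("node", "list", "-p", "multinode-410322")
	if before == after {
		fmt.Print("restart kept the node list unchanged:\n" + after)
	} else {
		fmt.Printf("node list changed:\nbefore:\n%safter:\n%s", before, after)
	}
}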

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-410322 node delete m03: (4.998553972s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.70s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-410322 stop: (23.863935295s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-410322 status: exit status 7 (104.299745ms)

                                                
                                                
-- stdout --
	multinode-410322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-410322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr: exit status 7 (107.509063ms)

                                                
                                                
-- stdout --
	multinode-410322
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-410322-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:50:04.524986 1296531 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:50:04.525158 1296531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:50:04.525183 1296531 out.go:374] Setting ErrFile to fd 2...
	I1217 01:50:04.525202 1296531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:50:04.525581 1296531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:50:04.525852 1296531 out.go:368] Setting JSON to false
	I1217 01:50:04.525906 1296531 mustload.go:66] Loading cluster: multinode-410322
	I1217 01:50:04.526618 1296531 config.go:182] Loaded profile config "multinode-410322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:50:04.527030 1296531 notify.go:221] Checking for updates...
	I1217 01:50:04.527079 1296531 status.go:174] checking status of multinode-410322 ...
	I1217 01:50:04.527725 1296531 cli_runner.go:164] Run: docker container inspect multinode-410322 --format={{.State.Status}}
	I1217 01:50:04.548797 1296531 status.go:371] multinode-410322 host status = "Stopped" (err=<nil>)
	I1217 01:50:04.548820 1296531 status.go:384] host is not running, skipping remaining checks
	I1217 01:50:04.548829 1296531 status.go:176] multinode-410322 status: &{Name:multinode-410322 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:50:04.548857 1296531 status.go:174] checking status of multinode-410322-m02 ...
	I1217 01:50:04.549154 1296531 cli_runner.go:164] Run: docker container inspect multinode-410322-m02 --format={{.State.Status}}
	I1217 01:50:04.576513 1296531 status.go:371] multinode-410322-m02 host status = "Stopped" (err=<nil>)
	I1217 01:50:04.576539 1296531 status.go:384] host is not running, skipping remaining checks
	I1217 01:50:04.576547 1296531 status.go:176] multinode-410322-m02 status: &{Name:multinode-410322-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-410322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 01:50:07.912330 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-410322 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.469296119s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-410322 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.19s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (34.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-410322
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-410322-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-410322-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.040442ms)

                                                
                                                
-- stdout --
	* [multinode-410322-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-410322-m02' is duplicated with machine name 'multinode-410322-m02' in profile 'multinode-410322'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-410322-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-410322-m03 --driver=docker  --container-runtime=crio: (31.547465692s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-410322
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-410322: exit status 80 (367.005166ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-410322 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-410322-m03 already exists in multinode-410322-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-410322-m03
E1217 01:51:30.985840 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-410322-m03: (2.115651491s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.18s)

                                                
                                    
x
+
TestPreload (150.6s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-676270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1217 01:51:45.354061 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.479726 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-676270 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m32.556323275s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-676270 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-676270 image pull gcr.io/k8s-minikube/busybox: (2.031439451s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-676270
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-676270: (5.979917376s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-676270 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-676270 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.381119168s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-676270 image list
helpers_test.go:176: Cleaning up "test-preload-676270" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-676270
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-676270: (2.409472546s)
--- PASS: TestPreload (150.60s)

                                                
                                    
x
+
TestScheduledStopUnix (110.47s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-594078 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-594078 --memory=3072 --driver=docker  --container-runtime=crio: (33.602898332s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594078 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 01:54:41.560455 1310578 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:54:41.560627 1310578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:54:41.560638 1310578 out.go:374] Setting ErrFile to fd 2...
	I1217 01:54:41.560643 1310578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:54:41.560989 1310578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:54:41.561285 1310578 out.go:368] Setting JSON to false
	I1217 01:54:41.561411 1310578 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:54:41.561794 1310578 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:54:41.561871 1310578 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/config.json ...
	I1217 01:54:41.562056 1310578 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:54:41.562179 1310578 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-594078 -n scheduled-stop-594078
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594078 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 01:54:42.001801 1310665 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:54:42.001996 1310665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:54:42.002004 1310665 out.go:374] Setting ErrFile to fd 2...
	I1217 01:54:42.002009 1310665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:54:42.002292 1310665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:54:42.002594 1310665 out.go:368] Setting JSON to false
	I1217 01:54:42.012729 1310665 daemonize_unix.go:73] killing process 1310599 as it is an old scheduled stop
	I1217 01:54:42.012982 1310665 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:54:42.013626 1310665 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:54:42.014095 1310665 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/config.json ...
	I1217 01:54:42.014492 1310665 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:54:42.014736 1310665 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 01:54:42.027457 1136597 retry.go:31] will retry after 122.674µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.028663 1136597 retry.go:31] will retry after 141.723µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.029839 1136597 retry.go:31] will retry after 168.455µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.031776 1136597 retry.go:31] will retry after 257.324µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.032943 1136597 retry.go:31] will retry after 757.833µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.034082 1136597 retry.go:31] will retry after 432.815µs: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.035238 1136597 retry.go:31] will retry after 1.009129ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.036529 1136597 retry.go:31] will retry after 2.419886ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.039870 1136597 retry.go:31] will retry after 3.292068ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.044179 1136597 retry.go:31] will retry after 5.214258ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.050445 1136597 retry.go:31] will retry after 8.18263ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.058831 1136597 retry.go:31] will retry after 12.75563ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.072469 1136597 retry.go:31] will retry after 13.516962ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.087103 1136597 retry.go:31] will retry after 18.261637ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
I1217 01:54:42.106486 1136597 retry.go:31] will retry after 42.002828ms: open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594078 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594078 -n scheduled-stop-594078
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-594078
E1217 01:55:07.912403 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-594078 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 01:55:07.999467 1311032 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:55:07.999665 1311032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:07.999687 1311032 out.go:374] Setting ErrFile to fd 2...
	I1217 01:55:07.999716 1311032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:55:08.000945 1311032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1134739/.minikube/bin
	I1217 01:55:08.001461 1311032 out.go:368] Setting JSON to false
	I1217 01:55:08.001641 1311032 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:55:08.002117 1311032 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:55:08.002251 1311032 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/scheduled-stop-594078/config.json ...
	I1217 01:55:08.002539 1311032 mustload.go:66] Loading cluster: scheduled-stop-594078
	I1217 01:55:08.002733 1311032 config.go:182] Loaded profile config "scheduled-stop-594078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-594078
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-594078: exit status 7 (71.068392ms)

                                                
                                                
-- stdout --
	scheduled-stop-594078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594078 -n scheduled-stop-594078
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-594078 -n scheduled-stop-594078: exit status 7 (73.325584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-594078" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-594078
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-594078: (5.205129711s)
--- PASS: TestScheduledStopUnix (110.47s)
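For reference, the scheduled-stop flow this test exercises can be driven by hand with the same commands captured above; the profile name scheduled-stop-594078 is specific to this run and stands in for any profile:

	# Schedule a stop 5 minutes out, then check the remaining time on the profile.
	$ minikube stop -p scheduled-stop-594078 --schedule 5m
	$ minikube status --format={{.TimeToStop}} -p scheduled-stop-594078
	
	# Re-running with a new --schedule replaces the pending stop; --cancel-scheduled clears it.
	$ minikube stop -p scheduled-stop-594078 --schedule 15s
	$ minikube stop -p scheduled-stop-594078 --cancel-scheduled
	
	# After a scheduled stop has fired, status reports Stopped and returns exit status 7.
	$ minikube status --format={{.Host}} -p scheduled-stop-594078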

                                                
                                    
x
+
TestInsufficientStorage (13.07s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-776672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-776672 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.474601067s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6cac3bef-92d6-4474-95f9-76d1ca4b67bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-776672] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd6edbed-1c66-417d-8ffa-4e4c10033ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"08450e41-f23c-4bcc-992a-2763c63e04aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"674c2fe0-c1ae-48d2-8f63-0278bba17184","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig"}}
	{"specversion":"1.0","id":"3943ed8b-9ffc-43e4-a059-24fc623555be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube"}}
	{"specversion":"1.0","id":"264d2330-1205-4f62-ba68-4d4db815f113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dfd04837-6303-4714-b547-3a1b6b21520c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30244861-ada2-4c90-8fd8-6138eb00ce78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f454d43e-c9f0-4a2f-ba9e-e056c92f1971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3f65170b-7788-4d7c-82d7-1f6ba17523ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9e03e91-bcfa-4e9f-8c9c-0306b670b9ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ded896c1-204d-485f-823e-fefd58c1b340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-776672\" primary control-plane node in \"insufficient-storage-776672\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb69c0cd-05a5-4713-9db0-6b869eabec67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"946e263f-1951-4e6d-b3aa-d7fb931fb43d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba2b255b-1b74-43af-bf02-a58233596d6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-776672 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-776672 --output=json --layout=cluster: exit status 7 (306.025722ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-776672","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-776672","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 01:56:09.135443 1312740 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-776672" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-776672 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-776672 --output=json --layout=cluster: exit status 7 (304.121974ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-776672","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-776672","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 01:56:09.440600 1312806 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-776672" does not appear in /home/jenkins/minikube-integration/22168-1134739/kubeconfig
	E1217 01:56:09.451609 1312806 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/insufficient-storage-776672/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-776672" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-776672
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-776672: (1.981678288s)
--- PASS: TestInsufficientStorage (13.07s)
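A note on this scenario: the run above appears to cap available storage through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values surfaced in the JSON output, and the RSRC_DOCKER_STORAGE advice lists the cleanup commands. A minimal sketch using the values from this run (profile name is run-specific):

	# Simulate low disk and expect the start to abort with exit status 26.
	$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-776672 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
	
	# Cleanup steps suggested by the exit-code-26 advice:
	$ docker system prune                     # optionally with -a
	$ minikube ssh -- docker system prune     # only applies when using the Docker container runtime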

                                                
                                    
x
+
TestRunningBinaryUpgrade (298.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2618036582 start -p running-upgrade-842996 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2618036582 start -p running-upgrade-842996 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.543964702s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-842996 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 02:05:07.912556 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:45.353478 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:07.479973 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:10.987317 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-842996 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.36706558s)
helpers_test.go:176: Cleaning up "running-upgrade-842996" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-842996
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-842996: (2.03021845s)
--- PASS: TestRunningBinaryUpgrade (298.90s)

                                                
                                    
x
+
TestMissingContainerUpgrade (115.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2719721274 start -p missing-upgrade-935345 --memory=3072 --driver=docker  --container-runtime=crio
E1217 01:56:45.354140 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2719721274 start -p missing-upgrade-935345 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.803314669s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-935345
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-935345
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-935345 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-935345 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.025381228s)
helpers_test.go:176: Cleaning up "missing-upgrade-935345" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-935345
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-935345: (2.471671001s)
--- PASS: TestMissingContainerUpgrade (115.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (94.573217ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-262920] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-1134739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1134739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
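The MK_USAGE failure above comes from combining mutually exclusive flags; restated as a minimal sketch with the profile name and flags used in this run:

	# Rejected: --no-kubernetes cannot be combined with --kubernetes-version (exit status 14).
	$ minikube start -p NoKubernetes-262920 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	
	# If kubernetes-version is set globally, clear it first, then start without Kubernetes.
	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-262920 --no-kubernetes --driver=docker --container-runtime=crio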

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (46.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262920 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262920 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.780958192s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-262920 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.86911313s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-262920 status -o json
no_kubernetes_test.go:226: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-262920 status -o json: exit status 2 (591.320506ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-262920","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-262920
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-262920: (2.411690491s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:162: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:162: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262920 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.988581532s)
--- PASS: TestNoKubernetes/serial/Start (7.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22168-1134739/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-262920 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-262920 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.313572ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:195: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:205: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:184: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-262920
no_kubernetes_test.go:184: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-262920: (1.294517264s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (9.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:217: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-262920 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:217: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-262920 --driver=docker  --container-runtime=crio: (9.85292438s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-262920 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-262920 "sudo systemctl is-active --quiet service kubelet": exit status 1 (553.358904ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.55s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E1217 01:58:07.479520 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStoppedBinaryUpgrade/Setup (1.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (321.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2813854949 start -p stopped-upgrade-925123 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2813854949 start -p stopped-upgrade-925123 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.819658401s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2813854949 -p stopped-upgrade-925123 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2813854949 -p stopped-upgrade-925123 stop: (1.276162134s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-925123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 01:59:48.438658 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:00:07.912323 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:01:45.354110 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/addons-219291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:02:50.550949 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:07.479451 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-389537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-925123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.37249237s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (321.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-925123
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-925123: (1.799587684s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.80s)

                                                
                                    
x
+
TestPause/serial/Start (84.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-666844 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-666844 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.425166847s)
--- PASS: TestPause/serial/Start (84.43s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-666844 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1217 02:10:07.911842 1136597 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1134739/.minikube/profiles/functional-099267/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-666844 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.073800848s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.09s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-970516 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-970516" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-970516
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
